I0512 09:55:42.869414 6 e2e.go:243] Starting e2e run "bfa7519f-1832-4aff-9b5c-2e82adb5f460" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589277341 - Will randomize all specs
Will run 215 of 4412 specs

May 12 09:55:43.050: INFO: >>> kubeConfig: /root/.kube/config
May 12 09:55:43.055: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 09:55:43.079: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 09:55:43.110: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 09:55:43.110: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 09:55:43.110: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 09:55:43.117: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 09:55:43.117: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 09:55:43.117: INFO: e2e test version: v1.15.11
May 12 09:55:43.119: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:55:43.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 12 09:55:43.307: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 12 09:55:49.859: INFO: Successfully updated pod "labelsupdate4b9414d2-b77c-481d-b86d-0b698714f10a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:55:52.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5266" for this suite.
May 12 09:56:14.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:14.107: INFO: namespace projected-5266 deletion completed in 22.099106616s

• [SLOW TEST:30.988 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:56:14.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
May 12 09:56:14.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 12 09:56:14.516: INFO: stderr: ""
May 12 09:56:14.516: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:56:14.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4513" for this suite.
May 12 09:56:20.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:20.772: INFO: namespace kubectl-4513 deletion completed in 6.233454202s

• [SLOW TEST:6.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:56:20.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
May 12 09:56:20.851: INFO: Waiting up to 5m0s for pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6" in namespace "emptydir-7407" to be "success or failure"
May 12 09:56:20.853: INFO: Pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310375ms
May 12 09:56:22.858: INFO: Pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006673852s
May 12 09:56:24.861: INFO: Pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.009841998s
May 12 09:56:26.927: INFO: Pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075719599s
STEP: Saw pod success
May 12 09:56:26.927: INFO: Pod "pod-dcfa5488-3277-4156-806a-88b5e43132d6" satisfied condition "success or failure"
May 12 09:56:26.929: INFO: Trying to get logs from node iruya-worker pod pod-dcfa5488-3277-4156-806a-88b5e43132d6 container test-container:
STEP: delete the pod
May 12 09:56:27.016: INFO: Waiting for pod pod-dcfa5488-3277-4156-806a-88b5e43132d6 to disappear
May 12 09:56:27.154: INFO: Pod pod-dcfa5488-3277-4156-806a-88b5e43132d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:56:27.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7407" for this suite.
May 12 09:56:33.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:33.331: INFO: namespace emptydir-7407 deletion completed in 6.17311774s

• [SLOW TEST:12.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:56:33.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 12 09:56:34.083: INFO: Waiting up to 5m0s for pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d" in namespace "emptydir-4311" to be "success or failure"
May 12 09:56:34.351: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d": Phase="Pending", Reason="", readiness=false. Elapsed: 267.794613ms
May 12 09:56:36.354: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271273936s
May 12 09:56:38.358: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274970693s
May 12 09:56:40.385: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302117241s
May 12 09:56:42.388: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.305001941s
STEP: Saw pod success
May 12 09:56:42.388: INFO: Pod "pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d" satisfied condition "success or failure"
May 12 09:56:42.390: INFO: Trying to get logs from node iruya-worker2 pod pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d container test-container:
STEP: delete the pod
May 12 09:56:42.446: INFO: Waiting for pod pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d to disappear
May 12 09:56:42.549: INFO: Pod pod-7eb4f618-0c1a-49ff-8969-79ca7188f26d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:56:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4311" for this suite.
May 12 09:56:49.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:56:49.725: INFO: namespace emptydir-4311 deletion completed in 7.171546037s

• [SLOW TEST:16.394 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:56:49.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-cd097d96-7e05-48cd-a211-87d5faad75b6
STEP: Creating configMap with name cm-test-opt-upd-9420849a-56de-4a21-9138-3a2e51783832
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cd097d96-7e05-48cd-a211-87d5faad75b6
STEP: Updating configmap cm-test-opt-upd-9420849a-56de-4a21-9138-3a2e51783832
STEP: Creating configMap with name cm-test-opt-create-a64b2807-b72d-4dd8-96c9-0eba3c2f332f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:57:05.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7586" for this suite.
May 12 09:57:29.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:57:29.234: INFO: namespace projected-7586 deletion completed in 24.071199699s

• [SLOW TEST:39.509 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:57:29.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 12 09:57:35.641: INFO: 10 pods remaining
May 12 09:57:35.641: INFO: 10 pods has nil DeletionTimestamp
May 12 09:57:35.641: INFO: 
May 12 09:57:37.171: INFO: 10 pods remaining
May 12 09:57:37.171: INFO: 0 pods has nil DeletionTimestamp
May 12 09:57:37.171: INFO: 
May 12 09:57:39.863: INFO: 0 pods remaining
May 12 09:57:39.863: INFO: 0 pods has nil DeletionTimestamp
May 12 09:57:39.863: INFO: 
May 12 09:57:41.366: INFO: 0 pods remaining
May 12 09:57:41.366: INFO: 0 pods has nil DeletionTimestamp
May 12 09:57:41.366: INFO: 
May 12 09:57:42.408: INFO: 0 pods remaining
May 12 09:57:42.408: INFO: 0 pods has nil DeletionTimestamp
May 12 09:57:42.408: INFO: 
STEP: Gathering metrics
W0512 09:57:42.773457 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 09:57:42.773: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:57:42.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4681" for this suite.
May 12 09:57:49.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:57:49.282: INFO: namespace gc-4681 deletion completed in 6.50623252s

• [SLOW TEST:20.048 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:57:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 12 09:57:49.448: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:57:56.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9272" for this suite.
May 12 09:58:02.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:02.520: INFO: namespace init-container-9272 deletion completed in 6.200337048s

• [SLOW TEST:13.237 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:58:02.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 09:58:02.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99" in namespace "downward-api-1975" to be "success or failure"
May 12 09:58:02.689: INFO: Pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99": Phase="Pending", Reason="", readiness=false. Elapsed: 26.730361ms
May 12 09:58:04.694: INFO: Pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031085083s
May 12 09:58:06.696: INFO: Pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033515631s
May 12 09:58:08.701: INFO: Pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037918556s
STEP: Saw pod success
May 12 09:58:08.701: INFO: Pod "downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99" satisfied condition "success or failure"
May 12 09:58:08.703: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99 container client-container:
STEP: delete the pod
May 12 09:58:08.744: INFO: Waiting for pod downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99 to disappear
May 12 09:58:08.899: INFO: Pod downwardapi-volume-e49d9a95-d13e-47f2-8d00-df7657f38f99 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:58:08.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1975" for this suite.
May 12 09:58:14.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:15.031: INFO: namespace downward-api-1975 deletion completed in 6.128081412s

• [SLOW TEST:12.511 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:58:15.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6585b410-ad46-4acb-bf44-66ca1cfb30d7
STEP: Creating a pod to test consume configMaps
May 12 09:58:16.130: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4" in namespace "projected-4537" to be "success or failure"
May 12 09:58:16.360: INFO: Pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4": Phase="Pending", Reason="", readiness=false. Elapsed: 229.559814ms
May 12 09:58:18.533: INFO: Pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402934912s
May 12 09:58:20.540: INFO: Pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409917455s
May 12 09:58:22.543: INFO: Pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412908234s
STEP: Saw pod success
May 12 09:58:22.543: INFO: Pod "pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4" satisfied condition "success or failure"
May 12 09:58:22.545: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4 container projected-configmap-volume-test:
STEP: delete the pod
May 12 09:58:23.013: INFO: Waiting for pod pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4 to disappear
May 12 09:58:23.255: INFO: Pod pod-projected-configmaps-4cbaaddb-eeb8-49c1-a1f3-625007aedce4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:58:23.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4537" for this suite.
May 12 09:58:29.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:29.346: INFO: namespace projected-4537 deletion completed in 6.087587542s

• [SLOW TEST:14.314 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:58:29.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
May 12 09:58:29.415: INFO: Waiting up to 5m0s for pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405" in namespace "var-expansion-3102" to be "success or failure"
May 12 09:58:29.423: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405": Phase="Pending", Reason="", readiness=false. Elapsed: 7.403811ms
May 12 09:58:31.426: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010731651s
May 12 09:58:33.466: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050597228s
May 12 09:58:35.473: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405": Phase="Running", Reason="", readiness=true. Elapsed: 6.057843031s
May 12 09:58:37.476: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060814671s
STEP: Saw pod success
May 12 09:58:37.476: INFO: Pod "var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405" satisfied condition "success or failure"
May 12 09:58:37.479: INFO: Trying to get logs from node iruya-worker pod var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405 container dapi-container:
STEP: delete the pod
May 12 09:58:37.495: INFO: Waiting for pod var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405 to disappear
May 12 09:58:37.500: INFO: Pod var-expansion-a9bc93a9-bf98-47fa-842e-fb90a9fac405 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:58:37.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3102" for this suite.
May 12 09:58:43.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:58:43.686: INFO: namespace var-expansion-3102 deletion completed in 6.182882242s

• [SLOW TEST:14.339 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:58:43.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-579.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-579.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-579.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-579.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-579.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-579.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 12 09:58:53.940: INFO: DNS probes using dns-579/dns-test-c2fadb83-f2a5-4298-839b-57d95fd58b6a succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:58:54.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-579" for this suite.
May 12 09:59:02.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:59:02.401: INFO: namespace dns-579 deletion completed in 8.371537352s

• [SLOW TEST:18.715 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:59:02.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
May 12 09:59:02.763: INFO: Waiting up to 5m0s for pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2" in namespace "containers-6310" to be "success or failure"
May 12 09:59:02.801: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.293564ms
May 12 09:59:04.805: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042278979s
May 12 09:59:06.808: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044858417s
May 12 09:59:08.811: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2": Phase="Running", Reason="", readiness=true. Elapsed: 6.047929473s
May 12 09:59:10.814: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051128066s
STEP: Saw pod success
May 12 09:59:10.814: INFO: Pod "client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2" satisfied condition "success or failure"
May 12 09:59:10.816: INFO: Trying to get logs from node iruya-worker pod client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2 container test-container:
STEP: delete the pod
May 12 09:59:11.015: INFO: Waiting for pod client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2 to disappear
May 12 09:59:11.077: INFO: Pod client-containers-852f007a-a28c-4f84-b71c-899cbc2332e2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:59:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6310" for this suite.
May 12 09:59:17.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:59:17.580: INFO: namespace containers-6310 deletion completed in 6.466720951s

• [SLOW TEST:15.179 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:59:17.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0512 09:59:48.200212 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 09:59:48.200: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 09:59:48.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6481" for this suite.
May 12 09:59:56.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:59:56.478: INFO: namespace gc-6481 deletion completed in 8.275867518s

• [SLOW TEST:38.898 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 09:59:56.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-192bb6be-16b9-4fc6-a5a2-ccfde83cde24
STEP: Creating a pod to test consume configMaps
May 12 09:59:57.110: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd" in namespace "projected-6105" to be "success or failure"
May 12 09:59:57.138: INFO: Pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 27.916679ms
May 12 09:59:59.272: INFO: Pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161770999s
May 12 10:00:01.275: INFO: Pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd": Phase="Running", Reason="", readiness=true. Elapsed: 4.165214929s
May 12 10:00:03.279: INFO: Pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169235156s
STEP: Saw pod success
May 12 10:00:03.279: INFO: Pod "pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd" satisfied condition "success or failure"
May 12 10:00:03.282: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd container projected-configmap-volume-test:
STEP: delete the pod
May 12 10:00:03.332: INFO: Waiting for pod pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd to disappear
May 12 10:00:03.433: INFO: Pod pod-projected-configmaps-4b9bfa0f-913e-4e9f-a7d1-9cab766a0ebd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:00:03.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6105" for this suite.
May 12 10:00:09.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:00:09.717: INFO: namespace projected-6105 deletion completed in 6.280149092s

• [SLOW TEST:13.238 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:00:09.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 10:00:09.792: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 12 10:00:11.900: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:00:13.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8139" for this suite.
May 12 10:00:19.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:00:19.578: INFO: namespace replication-controller-8139 deletion completed in 6.501279644s

• [SLOW TEST:9.860 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:00:19.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1200
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1200
STEP: Creating statefulset with conflicting port in namespace statefulset-1200
STEP: Waiting until pod test-pod will start running in namespace statefulset-1200
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1200
May 12 10:00:25.688: INFO: Observed stateful pod in namespace: statefulset-1200, name: ss-0, uid: 490c6b0b-846f-4482-b62b-df12b13f0ecf, status phase: Pending. Waiting for statefulset controller to delete.
May 12 10:00:32.146: INFO: Observed stateful pod in namespace: statefulset-1200, name: ss-0, uid: 490c6b0b-846f-4482-b62b-df12b13f0ecf, status phase: Failed. Waiting for statefulset controller to delete.
May 12 10:00:32.470: INFO: Observed stateful pod in namespace: statefulset-1200, name: ss-0, uid: 490c6b0b-846f-4482-b62b-df12b13f0ecf, status phase: Failed. Waiting for statefulset controller to delete.
May 12 10:00:32.471: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1200
STEP: Removing pod with conflicting port in namespace statefulset-1200
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1200 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 12 10:00:42.821: INFO: Deleting all statefulset in ns statefulset-1200
May 12 10:00:42.824: INFO: Scaling statefulset ss to 0
May 12 10:00:52.843: INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:00:52.845: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:00:52.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1200" for this suite.
May 12 10:00:59.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:00:59.248: INFO: namespace statefulset-1200 deletion completed in 6.378333759s

• [SLOW TEST:39.670 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:00:59.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-432dad88-39ee-477e-a182-e22ff1a8790a
STEP: Creating secret with name s-test-opt-upd-c307b0ac-0942-4686-a4ae-50fad568f0df
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-432dad88-39ee-477e-a182-e22ff1a8790a
STEP: Updating secret s-test-opt-upd-c307b0ac-0942-4686-a4ae-50fad568f0df
STEP: Creating secret with name s-test-opt-create-d3cf745a-a633-4ba3-a471-8b1b4df9096f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:01:10.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2834" for this suite.
May 12 10:01:34.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:01:34.361: INFO: namespace projected-2834 deletion completed in 24.288008556s

• [SLOW TEST:35.113 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:01:34.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee
May 12 10:01:35.050: INFO: Pod name my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee: Found 0 pods out of 1
May 12 10:01:40.075: INFO: Pod name my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee: Found 1 pods out of 1
May 12 10:01:40.075: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee" are running
May 12 10:01:40.142: INFO: Pod "my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee-lk852" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:01:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:01:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:01:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 10:01:35 +0000 UTC Reason: Message:}])
May 12 10:01:40.142: INFO: Trying to dial the pod
May 12 10:01:45.151: INFO: Controller my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee: Got expected result from replica 1 [my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee-lk852]: "my-hostname-basic-b3678dc4-ca47-4fec-bc4b-33ae054c92ee-lk852", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:01:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3327" for this suite.
May 12 10:01:51.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:01:51.358: INFO: namespace replication-controller-3327 deletion completed in 6.203350856s

• [SLOW TEST:16.996 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:01:51.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 10:01:56.602: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:01:56.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1286" for this suite.
May 12 10:02:02.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:02:02.885: INFO: namespace container-runtime-1286 deletion completed in 6.208425133s

• [SLOW TEST:11.527 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:02:02.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 12 10:02:02.958: INFO: Waiting up to 5m0s for pod "pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b" in namespace "emptydir-8915" to be "success or failure"
May 12 10:02:02.961: INFO: Pod "pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.822062ms
May 12 10:02:04.964: INFO: Pod "pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005517123s
May 12 10:02:06.968: INFO: Pod "pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009326862s
STEP: Saw pod success
May 12 10:02:06.968: INFO: Pod "pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b" satisfied condition "success or failure"
May 12 10:02:06.971: INFO: Trying to get logs from node iruya-worker2 pod pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b container test-container:
STEP: delete the pod
May 12 10:02:07.035: INFO: Waiting for pod pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b to disappear
May 12 10:02:07.039: INFO: Pod pod-4c4d6666-b8f0-4b4b-88d4-a0ebb43e893b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:02:07.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8915" for this suite.
May 12 10:02:13.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:02:13.204: INFO: namespace emptydir-8915 deletion completed in 6.161066101s • [SLOW TEST:10.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:02:13.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 12 10:02:13.279: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449893,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 10:02:13.279: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449893,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 12 10:02:23.286: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449912,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 10:02:23.286: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449912,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 12 10:02:33.292: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449932,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 10:02:33.292: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449932,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 12 10:02:43.432: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449951,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 10:02:43.432: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-a,UID:002c7b52-bd39-4626-97b9-00ebfd0f20cb,ResourceVersion:10449951,Generation:0,CreationTimestamp:2020-05-12 10:02:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 12 10:02:53.438: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-b,UID:976bae13-89a1-45a6-be3c-7d51d5998062,ResourceVersion:10449972,Generation:0,CreationTimestamp:2020-05-12 10:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 10:02:53.438: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-b,UID:976bae13-89a1-45a6-be3c-7d51d5998062,ResourceVersion:10449972,Generation:0,CreationTimestamp:2020-05-12 10:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 12 10:03:03.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-b,UID:976bae13-89a1-45a6-be3c-7d51d5998062,ResourceVersion:10449994,Generation:0,CreationTimestamp:2020-05-12 10:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 10:03:03.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6841,SelfLink:/api/v1/namespaces/watch-6841/configmaps/e2e-watch-test-configmap-b,UID:976bae13-89a1-45a6-be3c-7d51d5998062,ResourceVersion:10449994,Generation:0,CreationTimestamp:2020-05-12 10:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:03:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6841" for this suite. 
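
The duplicated "Got : ..." lines above are the point of the test: three watchers are open at once, selecting label A, label B, and "A or B" via a set-based selector (watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)), so every event on configmap A must be observed exactly twice, and likewise for B. A sketch of one such watcher in client-go (invented names; pre-1.18 signatures without a context argument):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default"

	// Watch only ConfigMaps carrying label A, like the test's "watch on configmaps with label A".
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Drive the same add -> modify -> delete sequence the test performs.
	go func() {
		cm := &v1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-watch-demo",
			Labels: map[string]string{"watch-this-configmap": "multiple-watchers-A"},
		}}
		cm, _ = cs.CoreV1().ConfigMaps(ns).Create(cm)
		cm.Data = map[string]string{"mutation": "1"}
		cm, _ = cs.CoreV1().ConfigMaps(ns).Update(cm)
		cs.CoreV1().ConfigMaps(ns).Delete(cm.Name, &metav1.DeleteOptions{})
	}()

	// Expect ADDED, then MODIFIED, then DELETED, mirroring the "Got : ..." lines above.
	for i := 0; i < 3; i++ {
		ev := <-w.ResultChan()
		fmt.Println("Got :", ev.Type, ev.Object.(*v1.ConfigMap).Name)
	}
}
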
May 12 10:03:19.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:03:19.544: INFO: namespace watch-6841 deletion completed in 6.096965951s
• [SLOW TEST:66.340 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:03:19.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4451/configmap-test-86533d99-b209-4a4a-97ec-28449f3f0710
STEP: Creating a pod to test consume configMaps
May 12 10:03:19.779: INFO: Waiting up to 5m0s for pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0" in namespace "configmap-4451" to be "success or failure"
May 12 10:03:19.782: INFO: Pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275093ms
May 12 10:03:21.785: INFO: Pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006185034s
May 12 10:03:23.788: INFO: Pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0": Phase="Running", Reason="", readiness=true. Elapsed: 4.009302303s
May 12 10:03:25.792: INFO: Pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012990359s
STEP: Saw pod success
May 12 10:03:25.792: INFO: Pod "pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0" satisfied condition "success or failure"
May 12 10:03:25.795: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0 container env-test:
STEP: delete the pod
May 12 10:03:25.814: INFO: Waiting for pod pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0 to disappear
May 12 10:03:25.902: INFO: Pod pod-configmaps-79053e3e-7aa5-4943-aa6f-c87ddc844ca0 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:03:25.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4451" for this suite.
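
"Consumable via environment variable" means the pod spec routes a ConfigMap key through an EnvVar's ValueFrom; the env-test container then just runs env, and the suite asserts the variable shows up in its log. The two objects involved, sketched with client-go (names are invented; the suite randomizes them, as in configmap-test-86533d99-... above; pre-1.18 signatures):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default"

	// The ConfigMap holding the value to inject.
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		panic(err)
	}

	// A pod whose CONFIG_DATA_1 env var is sourced from that key; `env` in the
	// container prints CONFIG_DATA_1=value-1, which is what gets asserted on.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"env"},
				Env: []v1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &v1.EnvVarSource{
						ConfigMapKeyRef: &v1.ConfigMapKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: "env-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod created; check its log for CONFIG_DATA_1=value-1")
}
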
May 12 10:03:31.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:03:32.076: INFO: namespace configmap-4451 deletion completed in 6.171158191s • [SLOW TEST:12.532 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:03:32.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 12 10:03:32.564: INFO: namespace kubectl-9199 May 12 10:03:32.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9199' May 12 10:03:35.754: INFO: stderr: "" May 12 10:03:35.754: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 12 10:03:36.757: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:36.758: INFO: Found 0 / 1 May 12 10:03:37.765: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:37.765: INFO: Found 0 / 1 May 12 10:03:38.958: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:38.958: INFO: Found 0 / 1 May 12 10:03:39.758: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:39.758: INFO: Found 0 / 1 May 12 10:03:40.837: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:40.837: INFO: Found 0 / 1 May 12 10:03:41.783: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:41.783: INFO: Found 0 / 1 May 12 10:03:42.758: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:42.758: INFO: Found 1 / 1 May 12 10:03:42.758: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 10:03:42.760: INFO: Selector matched 1 pods for map[app:redis] May 12 10:03:42.760: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 10:03:42.760: INFO: wait on redis-master startup in kubectl-9199 May 12 10:03:42.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5zshk redis-master --namespace=kubectl-9199' May 12 10:03:42.867: INFO: stderr: "" May 12 10:03:42.867: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 10:03:41.336 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 10:03:41.336 # Server started, Redis version 3.2.12\n1:M 12 May 10:03:41.336 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 10:03:41.336 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 12 10:03:42.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9199' May 12 10:03:43.159: INFO: stderr: "" May 12 10:03:43.159: INFO: stdout: "service/rm2 exposed\n" May 12 10:03:43.186: INFO: Service rm2 in namespace kubectl-9199 found. STEP: exposing service May 12 10:03:45.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9199' May 12 10:03:45.355: INFO: stderr: "" May 12 10:03:45.355: INFO: stdout: "service/rm3 exposed\n" May 12 10:03:45.382: INFO: Service rm3 in namespace kubectl-9199 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:03:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9199" for this suite. 
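
kubectl expose never modifies the RC itself: it only creates a Service whose selector is copied from the controller's pod labels (app=redis here, the same selector the wait loop above matches on) and whose port/targetPort come from the flags. A client-go sketch of what `expose rc redis-master --name=rm2 --port=1234 --target-port=6379` amounts to (run against an invented "default" namespace, since kubectl-9199 is deleted at the end of the test; pre-1.18 signatures):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The Service `kubectl expose rc redis-master ...` would create:
	// selector copied from the RC's pod labels, ports taken from the flags.
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"app": "redis"},
			Ports: []v1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(svc); err != nil {
		panic(err)
	}
	fmt.Println("service/rm2 created")
}

The second step in the log, `expose service rm2 --name=rm3 --port=2345`, repeats the same operation with rm2's selector, which is why both services end up pointing at the same Redis pod.
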
May 12 10:04:13.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:13.479: INFO: namespace kubectl-9199 deletion completed in 26.084129439s • [SLOW TEST:41.402 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:04:13.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-7rph STEP: Creating a pod to test atomic-volume-subpath May 12 10:04:13.624: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7rph" in namespace "subpath-314" to be "success or failure" May 12 10:04:13.628: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920051ms May 12 10:04:15.632: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444387s May 12 10:04:17.636: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 4.011735788s May 12 10:04:19.700: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 6.075679351s May 12 10:04:21.703: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 8.078858003s May 12 10:04:23.708: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 10.083375888s May 12 10:04:25.711: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 12.086399886s May 12 10:04:27.714: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 14.089569489s May 12 10:04:29.717: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 16.093181324s May 12 10:04:31.720: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 18.095716379s May 12 10:04:33.722: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. Elapsed: 20.098225943s May 12 10:04:35.726: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.102038894s May 12 10:04:37.730: INFO: Pod "pod-subpath-test-configmap-7rph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.1057106s STEP: Saw pod success May 12 10:04:37.730: INFO: Pod "pod-subpath-test-configmap-7rph" satisfied condition "success or failure" May 12 10:04:37.732: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-7rph container test-container-subpath-configmap-7rph: STEP: delete the pod May 12 10:04:37.906: INFO: Waiting for pod pod-subpath-test-configmap-7rph to disappear May 12 10:04:37.940: INFO: Pod pod-subpath-test-configmap-7rph no longer exists STEP: Deleting pod pod-subpath-test-configmap-7rph May 12 10:04:37.940: INFO: Deleting pod "pod-subpath-test-configmap-7rph" in namespace "subpath-314" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:04:37.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-314" for this suite. May 12 10:04:46.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:46.241: INFO: namespace subpath-314 deletion completed in 8.249439592s • [SLOW TEST:32.760 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:04:46.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b8bfaf74-759a-4f57-915f-421118490b2e STEP: Creating a pod to test consume secrets May 12 10:04:46.432: INFO: Waiting up to 5m0s for pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25" in namespace "secrets-8661" to be "success or failure" May 12 10:04:46.737: INFO: Pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25": Phase="Pending", Reason="", readiness=false. Elapsed: 304.326719ms May 12 10:04:48.773: INFO: Pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341117587s May 12 10:04:50.777: INFO: Pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344876102s May 12 10:04:52.781: INFO: Pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.348610593s STEP: Saw pod success May 12 10:04:52.781: INFO: Pod "pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25" satisfied condition "success or failure" May 12 10:04:52.784: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25 container secret-volume-test: STEP: delete the pod May 12 10:04:52.912: INFO: Waiting for pod pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25 to disappear May 12 10:04:53.084: INFO: Pod pod-secrets-463b8fff-dc49-486e-9f75-c3def86aca25 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:04:53.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8661" for this suite. May 12 10:04:59.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:04:59.204: INFO: namespace secrets-8661 deletion completed in 6.1145805s • [SLOW TEST:12.963 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:04:59.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-htp5c in namespace proxy-3877 I0512 10:04:59.331575 6 runners.go:180] Created replication controller with name: proxy-service-htp5c, namespace: proxy-3877, replica count: 1 I0512 10:05:00.381929 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:05:01.382068 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:05:02.382206 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:05:03.382438 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:05:04.382665 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 10:05:05.382863 6 runners.go:180] proxy-service-htp5c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 10:05:06.383062 6 runners.go:180] 
proxy-service-htp5c Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 10:05:06.386: INFO: setup took 7.108871359s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 10:05:06.397: INFO: (0) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 10.766191ms) May 12 10:05:06.397: INFO: (0) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 10.772358ms) May 12 10:05:06.398: INFO: (0) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 11.843509ms) May 12 10:05:06.398: INFO: (0) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 11.767573ms) May 12 10:05:06.399: INFO: (0) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 12.367749ms) May 12 10:05:06.399: INFO: (0) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 12.84329ms) May 12 10:05:06.400: INFO: (0) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 13.425153ms) May 12 10:05:06.400: INFO: (0) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 13.261291ms) May 12 10:05:06.400: INFO: (0) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 13.291726ms) May 12 10:05:06.400: INFO: (0) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 13.542332ms) May 12 10:05:06.400: INFO: (0) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 13.497631ms) May 12 10:05:06.404: INFO: (0) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 16.903791ms) May 12 10:05:06.405: INFO: (0) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 18.695221ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 14.692168ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 14.81524ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 14.771659ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 14.762406ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 14.891509ms) May 12 10:05:06.420: INFO: (1) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... 
(200; 14.916627ms) May 12 10:05:06.422: INFO: (1) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 16.414335ms) May 12 10:05:06.422: INFO: (1) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 16.406128ms) May 12 10:05:06.422: INFO: (1) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 17.037991ms) May 12 10:05:06.422: INFO: (1) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 17.027091ms) May 12 10:05:06.423: INFO: (1) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 17.009177ms) May 12 10:05:06.423: INFO: (1) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 17.157513ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 3.337518ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 4.225253ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 3.83934ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 2.770076ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.373064ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.077669ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.26387ms) May 12 10:05:06.427: INFO: (2) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 4.619454ms) May 12 10:05:06.428: INFO: (2) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.304725ms) May 12 10:05:06.428: INFO: (2) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 3.346888ms) May 12 10:05:06.428: INFO: (2) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 3.482416ms) May 12 10:05:06.428: INFO: (2) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 5.056118ms) May 12 10:05:06.428: INFO: (2) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 5.399703ms) May 12 10:05:06.430: INFO: (2) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 5.096415ms) May 12 10:05:06.433: INFO: (3) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.595125ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.771942ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 3.973182ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 4.020681ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 4.027358ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... 
(200; 4.02522ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 4.335751ms) May 12 10:05:06.434: INFO: (3) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 2.879489ms) May 12 10:05:06.439: INFO: (4) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 2.798796ms) May 12 10:05:06.447: INFO: (4) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 10.723371ms) May 12 10:05:06.447: INFO: (4) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 10.788982ms) May 12 10:05:06.447: INFO: (4) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 10.754887ms) May 12 10:05:06.447: INFO: (4) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 10.833489ms) May 12 10:05:06.447: INFO: (4) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 11.302655ms) May 12 10:05:06.448: INFO: (4) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 11.691416ms) May 12 10:05:06.448: INFO: (4) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 11.632475ms) May 12 10:05:06.448: INFO: (4) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: ... (200; 2.885292ms) May 12 10:05:06.452: INFO: (5) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 2.888118ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 3.093739ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.623887ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 3.78285ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.904224ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 3.864592ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 4.040266ms) May 12 10:05:06.453: INFO: (5) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 4.675406ms) May 12 10:05:06.459: INFO: (6) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 4.707565ms) May 12 10:05:06.459: INFO: (6) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 4.707332ms) May 12 10:05:06.459: INFO: (6) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 4.822795ms) May 12 10:05:06.459: INFO: (6) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 4.903554ms) May 12 10:05:06.462: INFO: (7) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 2.159923ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.798975ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 3.996559ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... 
(200; 4.196716ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.140159ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.322808ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 4.364357ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 4.579608ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 4.439648ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 4.451882ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.448106ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 4.845485ms) May 12 10:05:06.464: INFO: (7) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 4.85366ms) May 12 10:05:06.465: INFO: (7) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 4.823482ms) May 12 10:05:06.465: INFO: (7) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 5.246534ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 2.592938ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 2.641611ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 3.02408ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 3.075446ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... 
(200; 3.083677ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 3.216925ms) May 12 10:05:06.468: INFO: (8) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.217999ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 4.145296ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.058904ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 4.139115ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 4.224262ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 4.335245ms) May 12 10:05:06.469: INFO: (8) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 4.485874ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 4.738461ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.691021ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.712836ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.749537ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 4.752674ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 4.933795ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 4.971711ms) May 12 10:05:06.474: INFO: (9) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 5.017643ms) May 12 10:05:06.475: INFO: (9) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 5.055622ms) May 12 10:05:06.475: INFO: (9) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 5.223012ms) May 12 10:05:06.475: INFO: (9) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 5.295371ms) May 12 10:05:06.475: INFO: (9) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 5.264333ms) May 12 10:05:06.475: INFO: (9) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 3.418127ms) May 12 10:05:06.480: INFO: (10) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.475211ms) May 12 10:05:06.480: INFO: (10) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.512018ms) May 12 10:05:06.480: INFO: (10) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 4.581196ms) May 12 10:05:06.480: INFO: (10) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 4.647261ms) May 12 10:05:06.480: INFO: (10) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: ... 
(200; 5.143379ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 5.360333ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 5.378454ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 5.421388ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 5.425813ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 5.495273ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 5.51114ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 5.622474ms) May 12 10:05:06.481: INFO: (10) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 5.702385ms) May 12 10:05:06.484: INFO: (11) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 2.374271ms) May 12 10:05:06.484: INFO: (11) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 2.738625ms) May 12 10:05:06.484: INFO: (11) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 3.130752ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.747304ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 3.691935ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 3.775432ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.743278ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 4.031571ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.958692ms) May 12 10:05:06.485: INFO: (11) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 4.041235ms) May 12 10:05:06.486: INFO: (11) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 6.922646ms) May 12 10:05:06.493: INFO: (12) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 6.550815ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 7.464086ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 7.802421ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... 
(200; 8.026398ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 8.136515ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 7.541341ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 7.585093ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 8.22945ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 7.966048ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 7.560283ms) May 12 10:05:06.494: INFO: (12) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 7.682407ms) May 12 10:05:06.495: INFO: (12) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 8.194861ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 3.740552ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 3.684521ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 3.74336ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 3.960592ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 3.932252ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 3.928134ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 3.980172ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 4.035028ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 4.528958ms) May 12 10:05:06.499: INFO: (13) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.578559ms) May 12 10:05:06.500: INFO: (13) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.742515ms) May 12 10:05:06.500: INFO: (13) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.675486ms) May 12 10:05:06.500: INFO: (13) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 4.651962ms) May 12 10:05:06.500: INFO: (13) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test<... (200; 4.747356ms) May 12 10:05:06.500: INFO: (13) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.718687ms) May 12 10:05:06.503: INFO: (14) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... 
(200; 2.885476ms) May 12 10:05:06.503: INFO: (14) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 3.459368ms) May 12 10:05:06.503: INFO: (14) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 3.560816ms) May 12 10:05:06.503: INFO: (14) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.678574ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 3.991481ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.010876ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 4.017026ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: ... (200; 4.16876ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 4.105123ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 4.242507ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.260193ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.348559ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 4.354713ms) May 12 10:05:06.504: INFO: (14) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 4.297974ms) May 12 10:05:06.507: INFO: (15) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.265464ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 3.349063ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 3.778029ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 3.75968ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 4.032303ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.229945ms) May 12 10:05:06.508: INFO: (15) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: ... (200; 4.55049ms) May 12 10:05:06.509: INFO: (15) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 4.806877ms) May 12 10:05:06.509: INFO: (15) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 4.872199ms) May 12 10:05:06.509: INFO: (15) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... 
(200; 4.929273ms) May 12 10:05:06.509: INFO: (15) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 5.090026ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.454886ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 4.627164ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.612116ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 4.861432ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.60073ms) May 12 10:05:06.514: INFO: (16) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 5.027124ms) May 12 10:05:06.515: INFO: (16) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 5.190502ms) May 12 10:05:06.515: INFO: (16) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 5.226918ms) May 12 10:05:06.515: INFO: (16) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 5.916401ms) May 12 10:05:06.515: INFO: (16) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 6.070414ms) May 12 10:05:06.516: INFO: (16) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 6.092359ms) May 12 10:05:06.516: INFO: (16) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 6.065701ms) May 12 10:05:06.516: INFO: (16) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 6.423801ms) May 12 10:05:06.516: INFO: (16) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 6.527642ms) May 12 10:05:06.516: INFO: (16) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 6.663708ms) May 12 10:05:06.518: INFO: (17) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 2.156827ms) May 12 10:05:06.519: INFO: (17) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.28569ms) May 12 10:05:06.520: INFO: (17) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 4.066862ms) May 12 10:05:06.521: INFO: (17) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 5.045058ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 5.431678ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 5.533678ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 5.471821ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... 
(200; 5.592914ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 5.578246ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 5.60234ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 5.661284ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 5.605149ms) May 12 10:05:06.522: INFO: (17) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: test (200; 3.278406ms) May 12 10:05:06.525: INFO: (18) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:160/proxy/: foo (200; 3.449386ms) May 12 10:05:06.527: INFO: (18) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:460/proxy/: tls baz (200; 4.621821ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 5.549738ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 5.393657ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 5.581753ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 5.962187ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... (200; 5.902558ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 5.941024ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname1/proxy/: tls baz (200; 5.978425ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 5.975523ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname1/proxy/: foo (200; 6.033393ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:462/proxy/: tls qux (200; 6.174249ms) May 12 10:05:06.528: INFO: (18) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 6.29862ms) May 12 10:05:06.535: INFO: (19) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname2/proxy/: bar (200; 6.777312ms) May 12 10:05:06.535: INFO: (19) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:1080/proxy/: test<... (200; 6.822964ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:160/proxy/: foo (200; 11.748955ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft/proxy/: test (200; 11.826975ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:1080/proxy/: ... 
(200; 11.847598ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/services/https:proxy-service-htp5c:tlsportname2/proxy/: tls qux (200; 11.900492ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/services/http:proxy-service-htp5c:portname2/proxy/: bar (200; 11.882287ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/pods/http:proxy-service-htp5c-wplft:162/proxy/: bar (200; 11.857225ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/services/proxy-service-htp5c:portname1/proxy/: foo (200; 11.930352ms) May 12 10:05:06.540: INFO: (19) /api/v1/namespaces/proxy-3877/pods/proxy-service-htp5c-wplft:162/proxy/: bar (200; 11.843162ms) May 12 10:05:06.541: INFO: (19) /api/v1/namespaces/proxy-3877/pods/https:proxy-service-htp5c-wplft:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:05:26.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-504" for this suite. May 12 10:05:48.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:05:48.258: INFO: namespace replication-controller-504 deletion completed in 22.075140334s • [SLOW TEST:29.182 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:05:48.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-9001/secret-test-5ec2e6a0-05e3-43b3-95fd-12e21d1b80ee STEP: Creating a pod to test consume secrets May 12 10:05:48.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c" in namespace "secrets-9001" to be "success or failure" May 12 10:05:48.386: INFO: Pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.053648ms May 12 10:05:50.390: INFO: Pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02698338s May 12 10:05:52.394: INFO: Pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030701195s May 12 10:05:54.398: INFO: Pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03522352s STEP: Saw pod success May 12 10:05:54.398: INFO: Pod "pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c" satisfied condition "success or failure" May 12 10:05:54.402: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c container env-test: STEP: delete the pod May 12 10:05:54.449: INFO: Waiting for pod pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c to disappear May 12 10:05:54.467: INFO: Pod pod-configmaps-d70d5489-31dd-4678-aa0c-d19a1b93480c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:05:54.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9001" for this suite. May 12 10:06:02.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:06:02.564: INFO: namespace secrets-9001 deletion completed in 8.093395605s • [SLOW TEST:14.306 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:06:02.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-b991da78-6cb7-4c2a-b147-0d7f0584066f STEP: Creating a pod to test consume secrets May 12 10:06:02.674: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165" in namespace "projected-176" to be "success or failure" May 12 10:06:02.684: INFO: Pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742502ms May 12 10:06:04.687: INFO: Pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012972156s May 12 10:06:06.692: INFO: Pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.017062387s May 12 10:06:08.695: INFO: Pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02093826s STEP: Saw pod success May 12 10:06:08.695: INFO: Pod "pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165" satisfied condition "success or failure" May 12 10:06:08.699: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165 container projected-secret-volume-test: STEP: delete the pod May 12 10:06:08.739: INFO: Waiting for pod pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165 to disappear May 12 10:06:08.780: INFO: Pod pod-projected-secrets-3dee5fd4-fa28-4a37-9cf6-b86d42d60165 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:06:08.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-176" for this suite. May 12 10:06:14.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:06:14.871: INFO: namespace projected-176 deletion completed in 6.087163339s • [SLOW TEST:12.307 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:06:14.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e6e2e5a6-438d-40c6-8b63-e3b9ebba2e73 STEP: Creating a pod to test consume configMaps May 12 10:06:14.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b" in namespace "configmap-151" to be "success or failure" May 12 10:06:15.037: INFO: Pod "pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.163702ms May 12 10:06:17.040: INFO: Pod "pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059279185s May 12 10:06:19.043: INFO: Pod "pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062552494s STEP: Saw pod success May 12 10:06:19.043: INFO: Pod "pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b" satisfied condition "success or failure" May 12 10:06:19.046: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b container configmap-volume-test: STEP: delete the pod May 12 10:06:19.064: INFO: Waiting for pod pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b to disappear May 12 10:06:19.068: INFO: Pod pod-configmaps-d7c28bf5-2954-41a5-a170-5d1b319a864b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:06:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-151" for this suite. May 12 10:06:27.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:06:27.160: INFO: namespace configmap-151 deletion completed in 8.088459835s • [SLOW TEST:12.288 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:06:27.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-53c58143-88fb-47db-b063-8bb64f4f03e4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-53c58143-88fb-47db-b063-8bb64f4f03e4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:06:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9943" for this suite. 
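The two ConfigMap volume specs above reduce to a handful of steps that can be replayed by hand: create a ConfigMap, mount it as a volume, and (for the second spec) update it and watch the mounted file follow. A minimal sketch with kubectl; the names and the busybox image are illustrative, not the manifests the framework generates:

# Create a ConfigMap and mount it into a pod as a volume.
kubectl create configmap demo-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-reader
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF

# Update the value in place and watch the mounted file follow
# (the kubelet refreshes configMap volumes periodically, so allow up to a minute):
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-reader
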
May 12 10:06:57.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:06:57.448: INFO: namespace configmap-9943 deletion completed in 24.125467176s • [SLOW TEST:30.288 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:06:57.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 12 10:06:57.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1080' May 12 10:06:57.945: INFO: stderr: "" May 12 10:06:57.945: INFO: stdout: "pod/pause created\n" May 12 10:06:57.945: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 12 10:06:57.945: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1080" to be "running and ready" May 12 10:06:58.003: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 57.467041ms May 12 10:07:00.007: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061563769s May 12 10:07:02.104: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159194372s May 12 10:07:04.108: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.162731428s May 12 10:07:04.108: INFO: Pod "pause" satisfied condition "running and ready" May 12 10:07:04.108: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 12 10:07:04.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1080' May 12 10:07:04.211: INFO: stderr: "" May 12 10:07:04.211: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 12 10:07:04.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1080' May 12 10:07:04.331: INFO: stderr: "" May 12 10:07:04.331: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod May 12 10:07:04.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1080' May 12 10:07:04.426: INFO: stderr: "" May 12 10:07:04.426: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 12 10:07:04.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1080' May 12 10:07:04.538: INFO: stderr: "" May 12 10:07:04.538: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 12 10:07:04.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1080' May 12 10:07:04.851: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 10:07:04.851: INFO: stdout: "pod \"pause\" force deleted\n" May 12 10:07:04.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1080' May 12 10:07:05.487: INFO: stderr: "No resources found.\n" May 12 10:07:05.487: INFO: stdout: "" May 12 10:07:05.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1080 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 10:07:05.734: INFO: stderr: "" May 12 10:07:05.734: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:07:05.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1080" for this suite. 
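The label round-trip in this spec is driven entirely by kubectl and is easy to replay against any running pod; the commands below mirror the ones logged above, with the pod name taken from the spec:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # TESTING-LABEL column now shows the value
kubectl label pods pause testing-label-                      # a trailing '-' removes the key
kubectl get pod pause -L testing-label                       # the column is empty again
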
May 12 10:07:12.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:07:12.374: INFO: namespace kubectl-1080 deletion completed in 6.454097033s • [SLOW TEST:14.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:07:12.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-cc6e6345-ed8a-4802-8c19-9a19e052fe0c STEP: Creating a pod to test consume secrets May 12 10:07:12.786: INFO: Waiting up to 5m0s for pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549" in namespace "secrets-1935" to be "success or failure" May 12 10:07:12.824: INFO: Pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549": Phase="Pending", Reason="", readiness=false. Elapsed: 38.710135ms May 12 10:07:14.828: INFO: Pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042830802s May 12 10:07:16.832: INFO: Pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046327132s May 12 10:07:18.835: INFO: Pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049590521s STEP: Saw pod success May 12 10:07:18.835: INFO: Pod "pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549" satisfied condition "success or failure" May 12 10:07:18.838: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549 container secret-volume-test: STEP: delete the pod May 12 10:07:18.868: INFO: Waiting for pod pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549 to disappear May 12 10:07:18.884: INFO: Pod pod-secrets-cc3898a6-9d83-4e45-a16a-332a1cbb2549 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:07:18.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1935" for this suite. 
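The secret-volume spec above exercises two knobs at once: "items" remaps a key to a new path, and "mode" pins the file permissions. A hand-rolled equivalent; the names are hypothetical, with key, path, and mode chosen to match the spec's intent:

kubectl create secret generic secret-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF

kubectl logs secret-mode-demo   # expect "400" followed by the secret value
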
May 12 10:07:26.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:07:26.962: INFO: namespace secrets-1935 deletion completed in 8.074379107s • [SLOW TEST:14.588 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:07:26.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-99caf1e9-a13d-4e1e-9bb3-f03ef53e7bd0 STEP: Creating a pod to test consume configMaps May 12 10:07:27.048: INFO: Waiting up to 5m0s for pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e" in namespace "configmap-41" to be "success or failure" May 12 10:07:27.115: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e": Phase="Pending", Reason="", readiness=false. Elapsed: 67.879528ms May 12 10:07:29.120: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072152075s May 12 10:07:31.124: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076244573s May 12 10:07:33.127: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e": Phase="Running", Reason="", readiness=true. Elapsed: 6.079343029s May 12 10:07:35.164: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116120515s STEP: Saw pod success May 12 10:07:35.164: INFO: Pod "pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e" satisfied condition "success or failure" May 12 10:07:35.201: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e container configmap-volume-test: STEP: delete the pod May 12 10:07:35.219: INFO: Waiting for pod pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e to disappear May 12 10:07:35.224: INFO: Pod pod-configmaps-c789c8c9-daa9-4aa8-9fb5-db219d37a49e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:07:35.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-41" for this suite. 
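The "multiple volumes" spec simply mounts the same ConfigMap twice under different paths and checks that both mounts serve the data. A sketch with illustrative names:

kubectl create configmap multi-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: multi-cm
  - name: configmap-volume-2
    configMap:
      name: multi-cm
EOF

kubectl logs multi-mount-demo   # the value appears once per mount
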
May 12 10:07:43.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:07:43.894: INFO: namespace configmap-41 deletion completed in 8.666651676s • [SLOW TEST:16.931 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:07:43.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 10:07:43.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1532' May 12 10:07:44.191: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 10:07:44.191: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 12 10:07:44.315: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6x45j] May 12 10:07:44.315: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6x45j" in namespace "kubectl-1532" to be "running and ready" May 12 10:07:44.440: INFO: Pod "e2e-test-nginx-rc-6x45j": Phase="Pending", Reason="", readiness=false. Elapsed: 124.471459ms May 12 10:07:46.443: INFO: Pod "e2e-test-nginx-rc-6x45j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127362018s May 12 10:07:48.739: INFO: Pod "e2e-test-nginx-rc-6x45j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423709081s May 12 10:07:50.752: INFO: Pod "e2e-test-nginx-rc-6x45j": Phase="Running", Reason="", readiness=true. Elapsed: 6.436383776s May 12 10:07:50.752: INFO: Pod "e2e-test-nginx-rc-6x45j" satisfied condition "running and ready" May 12 10:07:50.752: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-6x45j] May 12 10:07:50.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1532' May 12 10:07:51.255: INFO: stderr: "" May 12 10:07:51.255: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 12 10:07:51.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1532' May 12 10:07:51.769: INFO: stderr: "" May 12 10:07:51.769: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:07:51.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1532" for this suite. May 12 10:08:14.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:08:14.587: INFO: namespace kubectl-1532 deletion completed in 22.797552955s • [SLOW TEST:30.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:08:14.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:08:14.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8027" for this suite. 
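The "secure master service" spec shows no visible steps because the whole check is a single API read: the built-in "kubernetes" Service in the default namespace must expose a port named "https" on 443. A one-liner that makes the same assertion by hand (jsonpath filter syntax assumed available in this kubectl):

kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expected output: 443
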
May 12 10:08:20.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:08:20.773: INFO: namespace services-8027 deletion completed in 6.07133657s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.186 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:08:20.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-e9152f3c-01aa-4d5f-9827-171700de386d in namespace container-probe-3795 May 12 10:08:24.887: INFO: Started pod liveness-e9152f3c-01aa-4d5f-9827-171700de386d in namespace container-probe-3795 STEP: checking the pod's current state and verifying that restartCount is present May 12 10:08:24.891: INFO: Initial restart count of pod liveness-e9152f3c-01aa-4d5f-9827-171700de386d is 0 May 12 10:08:51.253: INFO: Restart count of pod container-probe-3795/liveness-e9152f3c-01aa-4d5f-9827-171700de386d is now 1 (26.362170959s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:08:51.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3795" for this suite. 
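The liveness spec above amounts to a pod whose /healthz endpoint starts failing, so the kubelet kills and restarts the container and restartCount climbs from 0 to 1, as seen in the log. A rough equivalent; the image, args, and probe numbers here are assumptions for illustration, not the test's exact values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    # Assumed image: serves /healthz OK for a while, then starts returning errors.
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF

# Poll the restart count; it should tick over once the probe starts failing:
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
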
May 12 10:08:59.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:08:59.485: INFO: namespace container-probe-3795 deletion completed in 8.092670153s • [SLOW TEST:38.711 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:08:59.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:08:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3921" for this suite. 
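The kubelet spec above also has no visible steps: it creates a pod whose command exits non-zero on every start and then verifies the pod can still be deleted while crash-looping. A minimal reproduction with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]   # exits non-zero immediately, so the pod never becomes Ready
EOF

# Deletion must succeed even though the container keeps crashing:
kubectl delete pod bin-false
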
May 12 10:09:07.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:09:07.876: INFO: namespace kubelet-test-3921 deletion completed in 8.240800215s • [SLOW TEST:8.391 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:09:07.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 10:09:14.861: INFO: Successfully updated pod "annotationupdateb2e2b4d9-9157-4368-b66e-01428662ef3b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:09:16.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9962" for this suite. 
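The annotations spec mounts pod metadata through a projected downwardAPI volume and then mutates the pod to confirm the mounted file is rewritten. A sketch with illustrative names; the same kubelet refresh lag noted for ConfigMap volumes applies here:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: one
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

kubectl annotate pod annotation-demo build=two --overwrite
kubectl logs -f annotation-demo   # the mounted file picks up build=two
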
May 12 10:09:39.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:09:39.208: INFO: namespace projected-9962 deletion completed in 22.294367874s • [SLOW TEST:31.331 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:09:39.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:09:39.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8" in namespace "downward-api-7233" to be "success or failure" May 12 10:09:39.280: INFO: Pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.346337ms May 12 10:09:41.440: INFO: Pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169776549s May 12 10:09:43.680: INFO: Pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8": Phase="Running", Reason="", readiness=true. Elapsed: 4.409609614s May 12 10:09:46.195: INFO: Pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.924566338s STEP: Saw pod success May 12 10:09:46.195: INFO: Pod "downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8" satisfied condition "success or failure" May 12 10:09:46.198: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8 container client-container: STEP: delete the pod May 12 10:09:46.718: INFO: Waiting for pod downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8 to disappear May 12 10:09:46.741: INFO: Pod downwardapi-volume-eb301d2a-82d2-46ad-8c28-ae4e22195dd8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:09:46.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7233" for this suite. 
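The downward API spec exposes limits.memory through a volume while deliberately setting no memory limit on the container; the API's documented fallback in that case is the node's allocatable memory, which is what the test reads back. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mem-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # No resources.limits.memory here, on purpose.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: main
          resource: limits.memory
EOF

kubectl logs mem-limit-demo   # prints the node's allocatable memory, in bytes
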
May 12 10:09:54.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:09:54.956: INFO: namespace downward-api-7233 deletion completed in 8.210887803s • [SLOW TEST:15.748 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:09:54.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8770 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8770 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8770 May 12 10:09:56.151: INFO: Found 0 stateful pods, waiting for 1 May 12 10:10:06.154: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 10:10:06.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:10:06.529: INFO: stderr: "I0512 10:10:06.327878 355 log.go:172] (0xc000116dc0) (0xc000870640) Create stream\nI0512 10:10:06.327921 355 log.go:172] (0xc000116dc0) (0xc000870640) Stream added, broadcasting: 1\nI0512 10:10:06.329690 355 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0512 10:10:06.329712 355 log.go:172] (0xc000116dc0) (0xc000866000) Create stream\nI0512 10:10:06.329717 355 log.go:172] (0xc000116dc0) (0xc000866000) Stream added, broadcasting: 3\nI0512 10:10:06.330393 355 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0512 10:10:06.330419 355 log.go:172] (0xc000116dc0) (0xc00059c280) Create stream\nI0512 10:10:06.330430 355 log.go:172] (0xc000116dc0) (0xc00059c280) Stream added, broadcasting: 5\nI0512 10:10:06.331016 355 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0512 10:10:06.402177 355 log.go:172] (0xc000116dc0) Data frame received for 5\nI0512 10:10:06.402191 355 log.go:172] (0xc00059c280) (5) Data frame handling\nI0512 
10:10:06.402200 355 log.go:172] (0xc00059c280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 10:10:06.524348 355 log.go:172] (0xc000116dc0) Data frame received for 5\nI0512 10:10:06.524387 355 log.go:172] (0xc000116dc0) Data frame received for 3\nI0512 10:10:06.524416 355 log.go:172] (0xc000866000) (3) Data frame handling\nI0512 10:10:06.524429 355 log.go:172] (0xc000866000) (3) Data frame sent\nI0512 10:10:06.524444 355 log.go:172] (0xc000116dc0) Data frame received for 3\nI0512 10:10:06.524453 355 log.go:172] (0xc000866000) (3) Data frame handling\nI0512 10:10:06.524487 355 log.go:172] (0xc00059c280) (5) Data frame handling\nI0512 10:10:06.525798 355 log.go:172] (0xc000116dc0) Data frame received for 1\nI0512 10:10:06.525817 355 log.go:172] (0xc000870640) (1) Data frame handling\nI0512 10:10:06.525829 355 log.go:172] (0xc000870640) (1) Data frame sent\nI0512 10:10:06.525854 355 log.go:172] (0xc000116dc0) (0xc000870640) Stream removed, broadcasting: 1\nI0512 10:10:06.525869 355 log.go:172] (0xc000116dc0) Go away received\nI0512 10:10:06.526106 355 log.go:172] (0xc000116dc0) (0xc000870640) Stream removed, broadcasting: 1\nI0512 10:10:06.526116 355 log.go:172] (0xc000116dc0) (0xc000866000) Stream removed, broadcasting: 3\nI0512 10:10:06.526121 355 log.go:172] (0xc000116dc0) (0xc00059c280) Stream removed, broadcasting: 5\n" May 12 10:10:06.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:10:06.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:10:06.532: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 10:10:16.537: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 10:10:16.537: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:10:16.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999693s May 12 10:10:17.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.846867301s May 12 10:10:18.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.805438564s May 12 10:10:19.749: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.801280771s May 12 10:10:20.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.797606544s May 12 10:10:21.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.794050563s May 12 10:10:22.769: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.782267356s May 12 10:10:23.774: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.777464501s May 12 10:10:24.778: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.772549629s May 12 10:10:25.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 768.775915ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8770 May 12 10:10:26.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:10:26.982: INFO: stderr: "I0512 10:10:26.920078 376 log.go:172] (0xc0008e6420) (0xc00035c820) Create stream\nI0512 10:10:26.920131 376 log.go:172] (0xc0008e6420) (0xc00035c820) Stream added, broadcasting: 1\nI0512 10:10:26.922713 376 log.go:172] (0xc0008e6420) Reply frame received for 1\nI0512 10:10:26.922760 376 
log.go:172] (0xc0008e6420) [repetitive SPDY stream-frame debug output elided]\n+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
May 12 10:10:26.982: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 12 10:10:26.982: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 12 10:10:26.986: INFO: Found 1 stateful pods, waiting for 3
May 12 10:10:36.991: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:10:36.991: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:10:36.991: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
May 12 10:10:46.989: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:10:46.989: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 12 10:10:46.989: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 12 10:10:46.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 12 10:10:47.194: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
May 12 10:10:47.194: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 12 10:10:47.194: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 12 10:10:47.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 12 10:10:47.448: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
May 12 10:10:47.448: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 12 10:10:47.448: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 12 10:10:47.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 12 10:10:47.713: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
May 12 10:10:47.713: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 12 10:10:47.713: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 12 10:10:47.713: INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:10:47.717: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
May 12 10:10:57.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:10:57.741: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:10:57.741: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 12 10:10:57.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999772s
May 12 10:10:58.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990573963s
May 12 10:10:59.765: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986006369s
May 12 10:11:00.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982978603s
May 12 10:11:01.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.891319394s
May 12 10:11:02.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.867262309s
May 12 10:11:03.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.861485702s
May 12 10:11:04.909: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.856846769s
May 12 10:11:05.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.839092995s
May 12 10:11:06.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 834.483232ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8770
May 12 10:11:07.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 10:11:08.110: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
May 12 10:11:08.110: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 12 10:11:08.110: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 12 10:11:08.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 10:11:08.307: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
May 12 10:11:08.307: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 12 10:11:08.307: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 12 10:11:08.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8770 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 12 10:11:08.707: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
May 12 10:11:08.707: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 12 10:11:08.707: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 12 10:11:08.707: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 12 10:11:29.301: INFO: Deleting all statefulset in ns statefulset-8770
May 12 10:11:29.305: INFO: Scaling statefulset ss to 0
May 12 10:11:29.314: INFO: Waiting for statefulset status.replicas updated to 0
May 12 10:11:29.317: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:11:29.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8770" for this suite.
May 12 10:11:37.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:11:37.892: INFO: namespace statefulset-8770 deletion completed in 8.101926437s
• [SLOW TEST:102.935 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
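The halt behavior exercised above hinges on readiness: the test breaks each replica by moving index.html out of the nginx webroot, so the readiness check fails while the pod itself keeps Running, and the controller refuses to make scaling progress. A minimal sketch of reproducing the same probe-breaking trick by hand, using the names from this run (the webroot path is the one the test pods serve; everything else is standard kubectl):

    # Make every replica fail its readiness check, exactly as the test does:
    for i in 0 1 2; do
      kubectl exec --namespace=statefulset-8770 ss-$i -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
    done
    # readyReplicas should drop to 0 while the pods stay Running:
    kubectl get statefulset ss --namespace=statefulset-8770 -o jsonpath='{.status.readyReplicas}'
    # Restoring the file brings readiness (and further ordered scaling) back:
    for i in 0 1 2; do
      kubectl exec --namespace=statefulset-8770 ss-$i -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
    done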
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:11:37.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0512 10:11:50.070477 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 12 10:11:50.070: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:11:50.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2771" for this suite.
May 12 10:12:00.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:12:01.066: INFO: namespace gc-2771 deletion completed in 10.769574241s
• [SLOW TEST:23.174 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
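The test above gives half the dependent pods two owner references: deleting simpletest-rc-to-be-deleted in the foreground blocks on its dependents, but a pod that also lists simpletest-rc-to-stay as an owner must survive. A sketch of inspecting and driving the same machinery by hand (resource names taken from the log; the proxy port is kubectl's default):

    # Dependents carry their owners in metadata.ownerReferences:
    kubectl get pods -n gc-2771 -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
    # Foreground deletion waits for dependents before removing the owner itself;
    # propagationPolicy is part of DeleteOptions on the API:
    kubectl proxy --port=8001 &
    curl -X DELETE 'http://localhost:8001/api/v1/namespaces/gc-2771/replicationcontrollers/simpletest-rc-to-be-deleted' \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'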
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:12:01.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
May 12 10:12:01.733: INFO: Waiting up to 5m0s for pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4" in namespace "var-expansion-7166" to be "success or failure"
May 12 10:12:01.794: INFO: Pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 60.830066ms
May 12 10:12:03.798: INFO: Pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064386807s
May 12 10:12:05.880: INFO: Pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146403714s
May 12 10:12:07.884: INFO: Pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150965704s
STEP: Saw pod success
May 12 10:12:07.884: INFO: Pod "var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4" satisfied condition "success or failure"
May 12 10:12:07.887: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4 container dapi-container:
STEP: delete the pod
May 12 10:12:08.044: INFO: Waiting for pod var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4 to disappear
May 12 10:12:08.097: INFO: Pod var-expansion-fb1c843d-11ed-46e2-8df8-e8bc6a796ca4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:12:08.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7166" for this suite.
May 12 10:12:14.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:12:14.339: INFO: namespace var-expansion-7166 deletion completed in 6.237792601s
• [SLOW TEST:13.273 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:12:14.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 12 10:12:19.899: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:12:20.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2908" for this suite.
May 12 10:12:28.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:12:28.388: INFO: namespace container-runtime-2908 deletion completed in 8.378373077s • [SLOW TEST:14.049 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:12:28.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-b48b6d37-ef53-423f-a486-886f417bad24 STEP: Creating a pod to test consume secrets May 12 10:12:28.479: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8" in namespace "projected-1085" to be "success or failure" May 12 10:12:28.497: INFO: Pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.099421ms May 12 10:12:30.501: INFO: Pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021926386s May 12 10:12:32.515: INFO: Pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8": Phase="Running", Reason="", readiness=true. Elapsed: 4.03613167s May 12 10:12:34.520: INFO: Pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040954458s STEP: Saw pod success May 12 10:12:34.520: INFO: Pod "pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8" satisfied condition "success or failure" May 12 10:12:34.524: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8 container secret-volume-test: STEP: delete the pod May 12 10:12:34.563: INFO: Waiting for pod pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8 to disappear May 12 10:12:34.576: INFO: Pod pod-projected-secrets-71e2a312-edfa-4265-a5ae-693048ef5be8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:12:34.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1085" for this suite. 
May 12 10:12:40.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:12:40.680: INFO: namespace projected-1085 deletion completed in 6.101741253s • [SLOW TEST:12.292 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:12:40.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-e5e59e17-ca28-4d89-861b-7858eec3f006 in namespace container-probe-2456 May 12 10:12:46.778: INFO: Started pod busybox-e5e59e17-ca28-4d89-861b-7858eec3f006 in namespace container-probe-2456 STEP: checking the pod's current state and verifying that restartCount is present May 12 10:12:46.782: INFO: Initial restart count of pod busybox-e5e59e17-ca28-4d89-861b-7858eec3f006 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:16:48.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2456" for this suite. 
May 12 10:16:55.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:16:55.460: INFO: namespace container-probe-2456 deletion completed in 6.126437632s • [SLOW TEST:254.779 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:16:55.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 12 10:16:55.598: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:17:11.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9397" for this suite. 
May 12 10:17:17.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:17:18.057: INFO: namespace pods-9397 deletion completed in 6.140764211s • [SLOW TEST:22.597 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:17:18.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 12 10:17:22.606: INFO: Pod pod-hostip-2f3ef8c2-97e7-476d-86b5-24857c0e5e25 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:17:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9408" for this suite. May 12 10:17:46.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:17:46.740: INFO: namespace pods-9408 deletion completed in 24.131143511s • [SLOW TEST:28.682 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:17:46.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-eac18d49-396d-4849-827e-3cf17e485caf in namespace container-probe-4814 May 12 10:17:52.889: INFO: Started pod liveness-eac18d49-396d-4849-827e-3cf17e485caf in namespace container-probe-4814 STEP: checking the pod's current state and verifying that restartCount is present 
May 12 10:17:52.892: INFO: Initial restart count of pod liveness-eac18d49-396d-4849-827e-3cf17e485caf is 0
May 12 10:18:06.942: INFO: Restart count of pod container-probe-4814/liveness-eac18d49-396d-4849-827e-3cf17e485caf is now 1 (14.050564848s elapsed)
May 12 10:18:29.148: INFO: Restart count of pod container-probe-4814/liveness-eac18d49-396d-4849-827e-3cf17e485caf is now 2 (36.256350879s elapsed)
May 12 10:18:47.278: INFO: Restart count of pod container-probe-4814/liveness-eac18d49-396d-4849-827e-3cf17e485caf is now 3 (54.386238087s elapsed)
May 12 10:19:10.137: INFO: Restart count of pod container-probe-4814/liveness-eac18d49-396d-4849-827e-3cf17e485caf is now 4 (1m17.245518038s elapsed)
May 12 10:20:18.669: INFO: Restart count of pod container-probe-4814/liveness-eac18d49-396d-4849-827e-3cf17e485caf is now 5 (2m25.777531552s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:20:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4814" for this suite.
May 12 10:20:25.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:20:25.267: INFO: namespace container-probe-4814 deletion completed in 6.104513745s
• [SLOW TEST:158.526 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
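A liveness probe that passes at first and then starts failing drives exactly this restart pattern: each probe failure causes the kubelet to restart the container and bump restartCount, which only ever increases. A minimal sketch of a pod that reproduces it (pod name and timings are illustrative; the exec probe command matches the one this suite's probe tests use):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      restartPolicy: Always
      containers:
      - name: liveness
        image: busybox
        # Healthy for 30s, then the probe file disappears and the kubelet restarts the container.
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 1
    EOF
    # Watch restartCount climb 1, 2, 3, ...
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'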
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:20:25.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4339, will wait for the garbage collector to delete the pods
May 12 10:20:35.587: INFO: Deleting Job.batch foo took: 19.385473ms
May 12 10:20:35.987: INFO: Terminating Job.batch foo pods took: 400.297453ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:21:22.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4339" for this suite.
May 12 10:21:30.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:21:30.635: INFO: namespace job-4339 deletion completed in 8.319640593s
• [SLOW TEST:65.368 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:21:30.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 12 10:21:30.865: INFO: Waiting up to 5m0s for pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498" in namespace "emptydir-7767" to be "success or failure"
May 12 10:21:30.929: INFO: Pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498": Phase="Pending", Reason="", readiness=false. Elapsed: 63.365025ms
May 12 10:21:33.000: INFO: Pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134730156s
May 12 10:21:35.003: INFO: Pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137736087s
May 12 10:21:37.188: INFO: Pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.322248989s
STEP: Saw pod success
May 12 10:21:37.188: INFO: Pod "pod-4a6dcfb7-317d-429e-a482-273dd3642498" satisfied condition "success or failure"
May 12 10:21:37.642: INFO: Trying to get logs from node iruya-worker pod pod-4a6dcfb7-317d-429e-a482-273dd3642498 container test-container:
STEP: delete the pod
May 12 10:21:37.711: INFO: Waiting for pod pod-4a6dcfb7-317d-429e-a482-273dd3642498 to disappear
May 12 10:21:37.918: INFO: Pod pod-4a6dcfb7-317d-429e-a482-273dd3642498 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:21:37.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7767" for this suite.
May 12 10:21:43.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:21:44.018: INFO: namespace emptydir-7767 deletion completed in 6.094908498s • [SLOW TEST:13.382 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:21:44.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-8f5ce304-e960-49cc-beab-8b59e14469bc STEP: Creating a pod to test consume configMaps May 12 10:21:44.708: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0" in namespace "projected-7754" to be "success or failure" May 12 10:21:44.780: INFO: Pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 72.250983ms May 12 10:21:46.808: INFO: Pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100008535s May 12 10:21:48.812: INFO: Pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0": Phase="Running", Reason="", readiness=true. Elapsed: 4.103412191s May 12 10:21:50.917: INFO: Pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209171685s STEP: Saw pod success May 12 10:21:50.917: INFO: Pod "pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0" satisfied condition "success or failure" May 12 10:21:50.922: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0 container projected-configmap-volume-test: STEP: delete the pod May 12 10:21:50.979: INFO: Waiting for pod pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0 to disappear May 12 10:21:51.008: INFO: Pod pod-projected-configmaps-1566ab7c-c8b9-4026-893f-7a42c78cb6c0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:21:51.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7754" for this suite. 
May 12 10:21:57.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:21:57.539: INFO: namespace projected-7754 deletion completed in 6.527979731s • [SLOW TEST:13.521 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:21:57.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:21:57.781: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2" in namespace "projected-751" to be "success or failure" May 12 10:21:57.812: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.950778ms May 12 10:21:59.985: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203709032s May 12 10:22:02.064: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282467804s May 12 10:22:04.067: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285996787s May 12 10:22:06.072: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Running", Reason="", readiness=true. Elapsed: 8.290432979s May 12 10:22:08.146: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.364381185s STEP: Saw pod success May 12 10:22:08.146: INFO: Pod "downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2" satisfied condition "success or failure" May 12 10:22:08.409: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2 container client-container: STEP: delete the pod May 12 10:22:08.545: INFO: Waiting for pod downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2 to disappear May 12 10:22:08.702: INFO: Pod downwardapi-volume-c4a74cc5-9ebe-4ce9-9bb0-6f267c009cc2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:22:08.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-751" for this suite. May 12 10:22:16.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:22:16.817: INFO: namespace projected-751 deletion completed in 8.111481701s • [SLOW TEST:19.278 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:22:16.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 12 10:22:17.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5437' May 12 10:22:25.859: INFO: stderr: "" May 12 10:22:25.859: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 12 10:22:26.864: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:26.864: INFO: Found 0 / 1 May 12 10:22:28.031: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:28.031: INFO: Found 0 / 1 May 12 10:22:28.864: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:28.864: INFO: Found 0 / 1 May 12 10:22:30.073: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:30.073: INFO: Found 0 / 1 May 12 10:22:30.882: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:30.882: INFO: Found 0 / 1 May 12 10:22:32.139: INFO: Selector matched 1 pods for map[app:redis] May 12 10:22:32.139: INFO: Found 1 / 1 May 12 10:22:32.139: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1
May 12 10:22:32.142: INFO: Selector matched 1 pods for map[app:redis]
May 12 10:22:32.142: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 12 10:22:32.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437'
May 12 10:22:33.140: INFO: stderr: ""
May 12 10:22:33.140: INFO: stdout: "[Redis ASCII-art startup banner elided]\n1:M 12 May 10:22:30.771 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 10:22:30.771 # Server started, Redis version 3.2.12\n1:M 12 May 10:22:30.771 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 10:22:30.771 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 12 10:22:33.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437 --tail=1'
May 12 10:22:33.345: INFO: stderr: ""
May 12 10:22:33.345: INFO: stdout: "1:M 12 May 10:22:30.771 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 12 10:22:33.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437 --limit-bytes=1'
May 12 10:22:33.454: INFO: stderr: ""
May 12 10:22:33.454: INFO: stdout: " "
STEP: exposing timestamps
May 12 10:22:33.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437 --tail=1 --timestamps'
May 12 10:22:33.545: INFO: stderr: ""
May 12 10:22:33.545: INFO: stdout: "2020-05-12T10:22:30.772045329Z 1:M 12 May 10:22:30.771 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 12 10:22:36.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437 --since=1s'
May 12 10:22:36.764: INFO: stderr: ""
May 12 10:22:36.764: INFO: stdout: ""
May 12 10:22:36.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clcjb redis-master --namespace=kubectl-5437 --since=24h'
May 12 10:22:36.941: INFO: stderr: ""
May 12 10:22:36.941: INFO: stdout: "[same Redis startup banner and warnings as in the full log above]\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
May 12 10:22:36.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5437'
May 12 10:22:37.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 12 10:22:37.087: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 12 10:22:37.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5437'
May 12 10:22:37.312: INFO: stderr: "No resources found.\n"
May 12 10:22:37.312: INFO: stdout: ""
May 12 10:22:37.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 12 10:22:37.557: INFO: stderr: ""
May 12 10:22:37.557: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:22:37.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5437" for this suite.
May 12 10:23:02.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:23:02.085: INFO: namespace kubectl-5437 deletion completed in 24.525359737s
• [SLOW TEST:45.268 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
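For quick reference, these are the standard kubectl logs filtering flags the test just exercised, shown against the pod and namespace from this run:

    kubectl logs redis-master-clcjb redis-master --namespace=kubectl-5437 --tail=1          # last line only
    kubectl logs redis-master-clcjb redis-master --namespace=kubectl-5437 --limit-bytes=1   # first byte only
    kubectl logs redis-master-clcjb redis-master --namespace=kubectl-5437 --tail=1 --timestamps
    kubectl logs redis-master-clcjb redis-master --namespace=kubectl-5437 --since=1s        # empty if nothing was logged in the last second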
May 12 10:23:18.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:23:18.122: INFO: namespace configmap-4354 deletion completed in 8.21930027s • [SLOW TEST:16.036 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:23:18.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:23:49.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9479" for this suite. May 12 10:23:55.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:23:55.759: INFO: namespace namespaces-9479 deletion completed in 6.251570818s STEP: Destroying namespace "nsdeletetest-2299" for this suite. May 12 10:23:55.761: INFO: Namespace nsdeletetest-2299 was already deleted STEP: Destroying namespace "nsdeletetest-4672" for this suite. 
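The cascade verified above (namespace deletion removing all pods) is driven by the namespace lifecycle controller: deleting the namespace is enough, and a caller only needs to poll until the Terminating phase resolves. A minimal sketch with a hypothetical namespace name:

```go
package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "nsdeletetest-demo" // placeholder namespace

	// Deleting the namespace cascades to every object inside it.
	if err := client.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// The namespace stays in Terminating until its finalizer confirms
	// that all contained resources, pods included, are gone.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace and all of its pods are gone")
}
```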
May 12 10:24:01.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:24:01.876: INFO: namespace nsdeletetest-4672 deletion completed in 6.114696922s • [SLOW TEST:43.753 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:24:01.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:24:10.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-123" for this suite. 
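The ordering guarantee exercised above follows from all watches being fed from the same event stream: two watches opened at the same resourceVersion must observe identical sequences. A minimal sketch with a placeholder namespace, same client-go vintage as the earlier sketches:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // placeholder namespace

	// Anchor both watches at the same resource version.
	list, err := client.CoreV1().ConfigMaps(ns).List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}
	w1, err := client.CoreV1().ConfigMaps(ns).Watch(opts)
	if err != nil {
		panic(err)
	}
	defer w1.Stop()
	w2, err := client.CoreV1().ConfigMaps(ns).Watch(opts)
	if err != nil {
		panic(err)
	}
	defer w2.Stop()

	// While something else mutates configmaps in the namespace, both
	// watches should deliver the same events in the same order.
	for i := 0; i < 5; i++ {
		e1, e2 := <-w1.ResultChan(), <-w2.ResultChan()
		fmt.Printf("watch1=%v watch2=%v\n", e1.Type, e2.Type)
	}
}
```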
May 12 10:24:16.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:24:16.717: INFO: namespace watch-123 deletion completed in 6.186694578s • [SLOW TEST:14.841 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:24:16.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:24:16.940: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:24:18.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4889" for this suite. 
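The CRD round-trip above reduces to two calls on the apiextensions client. A minimal sketch against the v1beta1 API that this era of cluster serves, with a hypothetical group and kind:

```go
package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(config)

	// The CRD object name must be <plural>.<group>.
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
		},
	}
	crds := client.ApiextensionsV1beta1().CustomResourceDefinitions()
	if _, err := crds.Create(crd); err != nil {
		panic(err)
	}
	// Deleting the definition also removes the served custom resources.
	if err := crds.Delete(crd.Name, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```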
May 12 10:24:25.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:24:25.770: INFO: namespace custom-resource-definition-4889 deletion completed in 7.344623982s • [SLOW TEST:9.052 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:24:25.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3863 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 12 10:24:27.719: INFO: Found 0 stateful pods, waiting for 3 May 12 10:24:38.009: INFO: Found 2 stateful pods, waiting for 3 May 12 10:24:47.766: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:24:47.766: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:24:47.766: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 10:24:47.817: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 12 10:24:58.473: INFO: Updating stateful set ss2 May 12 10:24:59.479: INFO: Waiting for Pod statefulset-3863/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 12 10:25:10.648: INFO: Found 2 stateful pods, waiting for 3 May 12 10:25:20.652: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:25:20.652: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:25:20.652: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 10:25:30.653: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true May 12 10:25:30.653: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:25:30.653: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 12 10:25:30.674: INFO: Updating stateful set ss2 May 12 10:25:30.850: INFO: Waiting for Pod statefulset-3863/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:25:40.856: INFO: Waiting for Pod statefulset-3863/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:25:50.982: INFO: Updating stateful set ss2 May 12 10:25:51.317: INFO: Waiting for StatefulSet statefulset-3863/ss2 to complete update May 12 10:25:51.317: INFO: Waiting for Pod statefulset-3863/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:26:01.650: INFO: Waiting for StatefulSet statefulset-3863/ss2 to complete update May 12 10:26:01.650: INFO: Waiting for Pod statefulset-3863/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:26:11.325: INFO: Waiting for StatefulSet statefulset-3863/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 10:26:21.326: INFO: Deleting all statefulset in ns statefulset-3863 May 12 10:26:21.329: INFO: Scaling statefulset ss2 to 0 May 12 10:26:51.400: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:26:51.403: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:26:51.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3863" for this suite. 
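Both behaviours above, the canary and the phased roll-out, hinge on the RollingUpdate partition: only ordinals greater than or equal to the partition receive the new template. A minimal sketch mirroring the image bump in the test, with a placeholder namespace and the same client-go vintage as the earlier sketches:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // placeholder namespace

	ss, err := client.AppsV1().StatefulSets(ns).Get("ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// partition=2 on a 3-replica set confines the new template to the
	// highest ordinal (ss2-2): a canary. Ordinals below 2 keep the old
	// revision, and are even recreated at the old revision if deleted.
	partition := int32(2)
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type:          appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
	if _, err := client.AppsV1().StatefulSets(ns).Update(ss); err != nil {
		panic(err)
	}
}
```

Lowering the partition stepwise (2, then 1, then 0) and re-issuing Update yields the phased roll-out the test performs.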
May 12 10:27:01.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:27:01.566: INFO: namespace statefulset-3863 deletion completed in 10.095275218s • [SLOW TEST:155.796 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:27:01.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:27:02.645: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 12 10:27:08.011: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 10:27:10.118: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 12 10:27:12.190: INFO: Creating deployment "test-rollover-deployment" May 12 10:27:12.237: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 12 10:27:14.243: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 12 10:27:14.248: INFO: Ensure that both replica sets have 1 created replica May 12 10:27:14.252: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 12 10:27:14.258: INFO: Updating deployment test-rollover-deployment May 12 10:27:14.258: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 12 10:27:16.520: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 12 10:27:16.525: INFO: Make sure deployment "test-rollover-deployment" is complete May 12 10:27:16.529: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:16.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876034, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:18.535: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:18.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876034, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:20.772: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:20.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876039, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:22.553: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:22.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876039, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:24.643: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:24.643: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876039, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:26.536: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:26.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876039, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:28.534: INFO: all replica sets need to contain the pod-template-hash label May 12 10:27:28.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876039, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724876032, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:27:30.535: INFO: May 12 10:27:30.535: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 10:27:30.548: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3067,SelfLink:/apis/apps/v1/namespaces/deployment-3067/deployments/test-rollover-deployment,UID:1ef6a9d2-c6e6-4b7d-bf65-eee9701a25d4,ResourceVersion:10454574,Generation:2,CreationTimestamp:2020-05-12 10:27:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 10:27:12 +0000 UTC 2020-05-12 10:27:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 10:27:29 +0000 UTC 2020-05-12 10:27:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 10:27:30.554: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3067,SelfLink:/apis/apps/v1/namespaces/deployment-3067/replicasets/test-rollover-deployment-854595fc44,UID:e94ad2d3-afc9-4e8d-9508-07b0dee0520a,ResourceVersion:10454563,Generation:2,CreationTimestamp:2020-05-12 10:27:14 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1ef6a9d2-c6e6-4b7d-bf65-eee9701a25d4 0xc000f6abc7 0xc000f6abc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 10:27:30.554: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 10:27:30.554: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3067,SelfLink:/apis/apps/v1/namespaces/deployment-3067/replicasets/test-rollover-controller,UID:aa51bc50-b473-4f7f-912b-1bd53db5c4f9,ResourceVersion:10454572,Generation:2,CreationTimestamp:2020-05-12 10:27:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1ef6a9d2-c6e6-4b7d-bf65-eee9701a25d4 0xc000f6a9df 0xc000f6a9f0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:27:30.554: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3067,SelfLink:/apis/apps/v1/namespaces/deployment-3067/replicasets/test-rollover-deployment-9b8b997cf,UID:edeb8676-7b1f-429d-ad34-76eb4c3a4917,ResourceVersion:10454522,Generation:2,CreationTimestamp:2020-05-12 10:27:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1ef6a9d2-c6e6-4b7d-bf65-eee9701a25d4 0xc000f6ad60 0xc000f6ad61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:27:30.556: INFO: Pod "test-rollover-deployment-854595fc44-5mlk9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5mlk9,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3067,SelfLink:/api/v1/namespaces/deployment-3067/pods/test-rollover-deployment-854595fc44-5mlk9,UID:8970b98c-e47e-482b-b167-96f41eb3dfd6,ResourceVersion:10454541,Generation:0,CreationTimestamp:2020-05-12 10:27:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 e94ad2d3-afc9-4e8d-9508-07b0dee0520a 0xc001ac4547 0xc001ac4548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-czf25 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-czf25,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-czf25 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ac47d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ac47f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:27:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:27:19 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-12 10:27:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:27:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.110,StartTime:2020-05-12 10:27:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 10:27:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://dffc51c56bf056caf165b6f6cce81f2259bfd1f2c6858d546e0d4c0d36cbeff3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:27:30.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3067" for this suite. May 12 10:27:38.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:27:38.730: INFO: namespace deployment-3067 deletion completed in 8.17069737s • [SLOW TEST:37.163 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:27:38.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:27:38.811: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.71785ms)
May 12 10:27:38.814: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.412101ms)
May 12 10:27:38.816: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.285732ms)
May 12 10:27:38.850: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 34.101351ms)
May 12 10:27:38.855: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.339322ms)
May 12 10:27:38.858: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.179102ms)
May 12 10:27:38.860: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.32473ms)
May 12 10:27:38.863: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.18048ms)
May 12 10:27:38.864: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.941536ms)
May 12 10:27:38.867: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.094978ms)
May 12 10:27:38.868: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.898456ms)
May 12 10:27:38.871: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.268599ms)
May 12 10:27:38.873: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.403207ms)
May 12 10:27:38.876: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.393327ms)
May 12 10:27:38.878: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.674481ms)
May 12 10:27:38.881: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.031958ms)
May 12 10:27:38.884: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.487358ms)
May 12 10:27:38.887: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.952313ms)
May 12 10:27:38.890: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.72254ms)
May 12 10:27:38.892: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.191652ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:27:38.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6438" for this suite. May 12 10:27:45.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:27:45.398: INFO: namespace proxy-6438 deletion completed in 6.504000761s • [SLOW TEST:6.668 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:27:45.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-7ba6e8b8-32a6-4fb9-bea9-174cdb90fca6 STEP: Creating configMap with name cm-test-opt-upd-541e4227-7340-433a-a356-45ed124a3eb7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7ba6e8b8-32a6-4fb9-bea9-174cdb90fca6 STEP: Updating configmap cm-test-opt-upd-541e4227-7340-433a-a356-45ed124a3eb7 STEP: Creating configMap with name cm-test-opt-create-ef54886f-dcf0-47dc-9180-f960f2506ed5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:27:58.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9904" for this suite. 
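The "optional" half of the test above relies on ConfigMapVolumeSource.Optional: the pod starts even while the referenced map is absent, and the kubelet resyncs the volume contents once the map appears or changes, which is what the test waits to observe. A minimal sketch with hypothetical names:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // placeholder namespace

	// Optional: the pod is admitted and started even though the
	// referenced ConfigMap does not exist yet.
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-opt-demo"},
					Optional:             &optional,
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cfg/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}

	// Creating (or later updating) the map is eventually reflected in
	// the mounted files without restarting the pod.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-opt-demo"},
		Data:       map[string]string{"key": "updated-value"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		panic(err)
	}
}
```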
May 12 10:28:25.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:28:25.506: INFO: namespace configmap-9904 deletion completed in 26.931096076s • [SLOW TEST:40.107 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:28:25.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 10:28:25.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-831' May 12 10:28:26.171: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 10:28:26.171: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 12 10:28:28.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-831' May 12 10:28:29.058: INFO: stderr: "" May 12 10:28:29.059: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:28:29.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-831" for this suite. 
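kubectl warns above that --generator=deployment/apps.v1 is deprecated; the undeprecated route is to build the apps/v1 Deployment yourself. A minimal sketch producing an object comparable to what the generator emits, with a placeholder namespace:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // placeholder namespace

	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// apps/v1 requires an explicit selector matching the template labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "e2e-test-nginx-deployment",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	if _, err := client.AppsV1().Deployments(ns).Create(dep); err != nil {
		panic(err)
	}
}
```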
May 12 10:28:37.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:28:37.787: INFO: namespace kubectl-831 deletion completed in 8.709920172s • [SLOW TEST:12.281 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:28:37.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 12 10:28:38.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2361,SelfLink:/api/v1/namespaces/watch-2361/configmaps/e2e-watch-test-resource-version,UID:3082a378-be41-4b21-93ae-f2f6a2edb05b,ResourceVersion:10454826,Generation:0,CreationTimestamp:2020-05-12 10:28:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 10:28:38.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2361,SelfLink:/api/v1/namespaces/watch-2361/configmaps/e2e-watch-test-resource-version,UID:3082a378-be41-4b21-93ae-f2f6a2edb05b,ResourceVersion:10454827,Generation:0,CreationTimestamp:2020-05-12 10:28:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:28:38.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2361" for this suite. 
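Starting a watch at a recorded resourceVersion, as above, replays exactly the events that occurred after that version, even if they happened before the watch opened. A minimal sketch of the same create/update/update/delete sequence, with a placeholder namespace; the expected output is MODIFIED then DELETED:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	cms := client.CoreV1().ConfigMaps("default") // placeholder namespace

	cm, err := cms.Create(&corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "watch-demo"},
		Data:       map[string]string{"mutation": "0"},
	})
	if err != nil {
		panic(err)
	}

	// First modification; remember the resource version it produced.
	cm.Data["mutation"] = "1"
	if cm, err = cms.Update(cm); err != nil {
		panic(err)
	}
	fromRV := cm.ResourceVersion

	// Second modification and deletion happen before the watch opens.
	cm.Data["mutation"] = "2"
	if cm, err = cms.Update(cm); err != nil {
		panic(err)
	}
	if err := cms.Delete(cm.Name, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Watching from fromRV replays only events after the first update.
	w, err := cms.Watch(metav1.ListOptions{
		ResourceVersion: fromRV,
		FieldSelector:   "metadata.name=watch-demo",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for i := 0; i < 2; i++ {
		e := <-w.ResultChan()
		fmt.Println(e.Type) // MODIFIED, then DELETED
	}
}
```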
May 12 10:28:45.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:28:45.509: INFO: namespace watch-2361 deletion completed in 6.626958849s • [SLOW TEST:7.721 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:28:45.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 10:28:46.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9968' May 12 10:28:46.100: INFO: stderr: "" May 12 10:28:46.100: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 12 10:28:46.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9968' May 12 10:28:48.571: INFO: stderr: "" May 12 10:28:48.571: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:28:48.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9968" for this suite. 
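--restart=Never above makes kubectl's run-pod/v1 generator emit a bare pod rather than a workload controller, so a failed container is never restarted or replaced. A minimal client-go equivalent, with a placeholder namespace:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ns := "default" // placeholder namespace

	// A bare pod with RestartPolicy Never: no controller owns it, so
	// nothing recreates it if it exits or is deleted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
}
```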
May 12 10:28:54.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:28:54.655: INFO: namespace kubectl-9968 deletion completed in 6.07832274s • [SLOW TEST:9.146 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:28:54.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:28:54.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251" in namespace "projected-7331" to be "success or failure" May 12 10:28:54.985: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Pending", Reason="", readiness=false. Elapsed: 26.80114ms May 12 10:28:56.988: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030267041s May 12 10:28:58.993: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03481036s May 12 10:29:01.396: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437784436s May 12 10:29:03.407: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Running", Reason="", readiness=true. Elapsed: 8.449131782s May 12 10:29:05.412: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.453609369s STEP: Saw pod success May 12 10:29:05.412: INFO: Pod "downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251" satisfied condition "success or failure" May 12 10:29:05.415: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251 container client-container: STEP: delete the pod May 12 10:29:06.066: INFO: Waiting for pod downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251 to disappear May 12 10:29:06.282: INFO: Pod downwardapi-volume-8150e997-1c28-43f3-a1e7-2b3767573251 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:29:06.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7331" for this suite. May 12 10:29:12.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:29:12.846: INFO: namespace projected-7331 deletion completed in 6.561310358s • [SLOW TEST:18.192 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:29:12.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3334/configmap-test-c6548dc3-532e-483b-8272-32764c0b79da STEP: Creating a pod to test consume configMaps May 12 10:29:13.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c" in namespace "configmap-3334" to be "success or failure" May 12 10:29:13.054: INFO: Pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.384802ms May 12 10:29:15.057: INFO: Pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056643575s May 12 10:29:17.239: INFO: Pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238639974s May 12 10:29:19.244: INFO: Pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.243160039s STEP: Saw pod success May 12 10:29:19.244: INFO: Pod "pod-configmaps-edca104d-de98-481e-be11-46f31455310c" satisfied condition "success or failure" May 12 10:29:19.247: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-edca104d-de98-481e-be11-46f31455310c container env-test: STEP: delete the pod May 12 10:29:19.307: INFO: Waiting for pod pod-configmaps-edca104d-de98-481e-be11-46f31455310c to disappear May 12 10:29:19.404: INFO: Pod pod-configmaps-edca104d-de98-481e-be11-46f31455310c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:29:19.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3334" for this suite. May 12 10:29:27.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:29:27.553: INFO: namespace configmap-3334 deletion completed in 8.07867708s • [SLOW TEST:14.706 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:29:27.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 12 10:29:27.992: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5922" to be "success or failure" May 12 10:29:28.037: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 45.06676ms May 12 10:29:30.041: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048938388s May 12 10:29:32.162: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170152425s May 12 10:29:34.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174064221s May 12 10:29:36.300: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307763558s May 12 10:29:38.306: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.313809626s May 12 10:29:40.310: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.317630083s STEP: Saw pod success May 12 10:29:40.310: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 12 10:29:40.312: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 10:29:40.479: INFO: Waiting for pod pod-host-path-test to disappear May 12 10:29:40.493: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:29:40.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5922" for this suite. May 12 10:29:48.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:29:48.660: INFO: namespace hostpath-5922 deletion completed in 8.163797892s • [SLOW TEST:21.106 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:29:48.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-49fd7c4b-f5e3-4117-a415-5c67855837db STEP: Creating a pod to test consume configMaps May 12 10:29:48.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97" in namespace "configmap-4688" to be "success or failure" May 12 10:29:48.971: INFO: Pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97": Phase="Pending", Reason="", readiness=false. Elapsed: 33.694037ms May 12 10:29:50.983: INFO: Pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04507848s May 12 10:29:52.987: INFO: Pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049657693s May 12 10:29:54.991: INFO: Pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052867122s STEP: Saw pod success May 12 10:29:54.991: INFO: Pod "pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97" satisfied condition "success or failure" May 12 10:29:54.992: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97 container configmap-volume-test: STEP: delete the pod May 12 10:29:55.202: INFO: Waiting for pod pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97 to disappear May 12 10:29:55.263: INFO: Pod pod-configmaps-497a77ec-c752-466b-9cba-01932df1ca97 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:29:55.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4688" for this suite. May 12 10:30:01.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:30:02.019: INFO: namespace configmap-4688 deletion completed in 6.752723354s • [SLOW TEST:13.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:30:02.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5274fe30-957e-43c0-a1b2-de2a1825daf1 STEP: Creating a pod to test consume secrets May 12 10:30:02.445: INFO: Waiting up to 5m0s for pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1" in namespace "secrets-9124" to be "success or failure" May 12 10:30:02.455: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.545239ms May 12 10:30:04.458: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013242263s May 12 10:30:06.463: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017457311s May 12 10:30:08.467: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021896252s May 12 10:30:10.703: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.25744272s
STEP: Saw pod success
May 12 10:30:10.703: INFO: Pod "pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1" satisfied condition "success or failure"
May 12 10:30:10.706: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1 container secret-volume-test:
STEP: delete the pod
May 12 10:30:11.039: INFO: Waiting for pod pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1 to disappear
May 12 10:30:11.655: INFO: Pod pod-secrets-5459147e-c8e8-429e-afa1-bf6fde0909f1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:30:11.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9124" for this suite.
May 12 10:30:19.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:30:20.060: INFO: namespace secrets-9124 deletion completed in 8.400412963s

• [SLOW TEST:18.040 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:30:20.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 12 10:30:20.206: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 10:30:20.214: INFO: Waiting for terminating namespaces to be deleted...
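The [sig-storage] Secrets test that completed above amounts to mounting one secret at two different paths inside a single pod and reading both copies back. A minimal hand-written equivalent of the pod it generates is sketched below; the object names, image, command, and the data key are illustrative stand-ins, not the randomly generated values from this run:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-two-volumes        # hypothetical name; the test generates a random one
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name as it appears in the log above
    image: busybox:1.31                # assumed image; the test uses its own helper image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example  # stand-in for the generated secret name
  - name: secret-volume-2
    secret:
      secretName: secret-test-example  # the same secret, mounted a second time

The pod runs to completion, which is why the framework polls its phase for "success or failure" rather than waiting on readiness.
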
May 12 10:30:20.216: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 12 10:30:20.219: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.219: INFO: Container kube-proxy ready: true, restart count 0
May 12 10:30:20.219: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.219: INFO: Container kindnet-cni ready: true, restart count 0
May 12 10:30:20.219: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 12 10:30:20.227: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.227: INFO: Container coredns ready: true, restart count 0
May 12 10:30:20.227: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.227: INFO: Container coredns ready: true, restart count 0
May 12 10:30:20.227: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.227: INFO: Container kube-proxy ready: true, restart count 0
May 12 10:30:20.227: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 12 10:30:20.227: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-58b994ab-3de3-46c0-b22f-14d0624039c7 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-58b994ab-3de3-46c0-b22f-14d0624039c7 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-58b994ab-3de3-46c0-b22f-14d0624039c7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:30:30.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-315" for this suite.
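The NodeSelector test above does four things: it schedules a throwaway unlabeled pod to discover a schedulable node, applies a random label to that node (the kubernetes.io/e2e-58b994ab-... key with value 42 in this run), relaunches a pod whose nodeSelector demands that label, and finally removes the label again. The same scheduling constraint can be reproduced outside the framework with a manifest along these lines; the label key/value and names below are illustrative stand-ins for the generated ones:

---
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector             # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"    # stand-in for the random label the test applies
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1        # assumed; any image works for a scheduling check

Labeling the node by hand would be 'kubectl label node iruya-worker2 kubernetes.io/e2e-example=42'; with that in place the scheduler only considers nodes carrying the label for this pod.
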
May 12 10:30:40.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:30:40.595: INFO: namespace sched-pred-315 deletion completed in 10.121812414s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.535 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:30:40.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 12 10:30:40.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79" in namespace "projected-4208" to be "success or failure"
May 12 10:30:40.786: INFO: Pod "downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79": Phase="Pending", Reason="", readiness=false. Elapsed: 57.360014ms
May 12 10:30:42.790: INFO: Pod "downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060623503s
May 12 10:30:44.793: INFO: Pod "downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06418028s
STEP: Saw pod success
May 12 10:30:44.793: INFO: Pod "downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79" satisfied condition "success or failure"
May 12 10:30:44.795: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79 container client-container:
STEP: delete the pod
May 12 10:30:44.984: INFO: Waiting for pod downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79 to disappear
May 12 10:30:45.199: INFO: Pod downwardapi-volume-c0187d58-96e5-403c-81d6-f5118492fa79 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:30:45.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4208" for this suite.
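The projected downwardAPI test above ("should set mode on item file") passes when the file projected into the pod carries the per-item mode from the volume spec. A rough equivalent of the pod it creates is sketched below; the mount path, field reference, and mode are assumptions for illustration, since the test's actual values are generated in code:

---
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # hypothetical name; the test generates a random one
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # container name as it appears in the log above
    image: busybox:1.31                # assumed helper image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                 # the per-item mode the test then verifies on the file

The check boils down to reading the file's permissions from inside the container; with mode 0400 the ls above would show -r--------.
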
May 12 10:30:53.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:30:53.501: INFO: namespace projected-4208 deletion completed in 8.299137726s • [SLOW TEST:12.907 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:30:53.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 12 10:31:16.200: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.200: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.229581 6 log.go:172] (0xc0023520b0) (0xc001f1fa40) Create stream I0512 10:31:16.229612 6 log.go:172] (0xc0023520b0) (0xc001f1fa40) Stream added, broadcasting: 1 I0512 10:31:16.232312 6 log.go:172] (0xc0023520b0) Reply frame received for 1 I0512 10:31:16.232357 6 log.go:172] (0xc0023520b0) (0xc002300000) Create stream I0512 10:31:16.232374 6 log.go:172] (0xc0023520b0) (0xc002300000) Stream added, broadcasting: 3 I0512 10:31:16.233352 6 log.go:172] (0xc0023520b0) Reply frame received for 3 I0512 10:31:16.233393 6 log.go:172] (0xc0023520b0) (0xc001f1fae0) Create stream I0512 10:31:16.233406 6 log.go:172] (0xc0023520b0) (0xc001f1fae0) Stream added, broadcasting: 5 I0512 10:31:16.234137 6 log.go:172] (0xc0023520b0) Reply frame received for 5 I0512 10:31:16.301540 6 log.go:172] (0xc0023520b0) Data frame received for 3 I0512 10:31:16.301573 6 log.go:172] (0xc002300000) (3) Data frame handling I0512 10:31:16.301601 6 log.go:172] (0xc0023520b0) Data frame received for 5 I0512 10:31:16.301634 6 log.go:172] (0xc001f1fae0) (5) Data frame handling I0512 10:31:16.301667 6 log.go:172] (0xc002300000) (3) Data frame sent I0512 10:31:16.301685 6 log.go:172] (0xc0023520b0) Data frame received for 3 I0512 10:31:16.301701 6 log.go:172] (0xc002300000) (3) Data frame handling I0512 10:31:16.303168 6 log.go:172] (0xc0023520b0) Data frame received for 1 I0512 10:31:16.303185 6 log.go:172] (0xc001f1fa40) (1) Data frame handling I0512 10:31:16.303202 6 log.go:172] (0xc001f1fa40) (1) Data frame sent I0512 10:31:16.303213 6 log.go:172] (0xc0023520b0) (0xc001f1fa40) Stream removed, broadcasting: 1 I0512 10:31:16.303265 6 log.go:172] 
(0xc0023520b0) Go away received I0512 10:31:16.303284 6 log.go:172] (0xc0023520b0) (0xc001f1fa40) Stream removed, broadcasting: 1 I0512 10:31:16.303293 6 log.go:172] (0xc0023520b0) (0xc002300000) Stream removed, broadcasting: 3 I0512 10:31:16.303343 6 log.go:172] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0xc001f1fae0)} I0512 10:31:16.303400 6 log.go:172] (0xc0023520b0) (0xc001f1fae0) Stream removed, broadcasting: 5 May 12 10:31:16.303: INFO: Exec stderr: "" May 12 10:31:16.303: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.303: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.573632 6 log.go:172] (0xc000b93080) (0xc00268e3c0) Create stream I0512 10:31:16.573673 6 log.go:172] (0xc000b93080) (0xc00268e3c0) Stream added, broadcasting: 1 I0512 10:31:16.576868 6 log.go:172] (0xc000b93080) Reply frame received for 1 I0512 10:31:16.576915 6 log.go:172] (0xc000b93080) (0xc0014ccf00) Create stream I0512 10:31:16.576923 6 log.go:172] (0xc000b93080) (0xc0014ccf00) Stream added, broadcasting: 3 I0512 10:31:16.578122 6 log.go:172] (0xc000b93080) Reply frame received for 3 I0512 10:31:16.578159 6 log.go:172] (0xc000b93080) (0xc0014ccfa0) Create stream I0512 10:31:16.578171 6 log.go:172] (0xc000b93080) (0xc0014ccfa0) Stream added, broadcasting: 5 I0512 10:31:16.578998 6 log.go:172] (0xc000b93080) Reply frame received for 5 I0512 10:31:16.631152 6 log.go:172] (0xc000b93080) Data frame received for 5 I0512 10:31:16.631192 6 log.go:172] (0xc0014ccfa0) (5) Data frame handling I0512 10:31:16.631219 6 log.go:172] (0xc000b93080) Data frame received for 3 I0512 10:31:16.631244 6 log.go:172] (0xc0014ccf00) (3) Data frame handling I0512 10:31:16.631275 6 log.go:172] (0xc0014ccf00) (3) Data frame sent I0512 10:31:16.631284 6 log.go:172] (0xc000b93080) Data frame received for 3 I0512 10:31:16.631292 6 log.go:172] (0xc0014ccf00) (3) Data frame handling I0512 10:31:16.632291 6 log.go:172] (0xc000b93080) Data frame received for 1 I0512 10:31:16.632309 6 log.go:172] (0xc00268e3c0) (1) Data frame handling I0512 10:31:16.632326 6 log.go:172] (0xc00268e3c0) (1) Data frame sent I0512 10:31:16.632350 6 log.go:172] (0xc000b93080) (0xc00268e3c0) Stream removed, broadcasting: 1 I0512 10:31:16.632400 6 log.go:172] (0xc000b93080) Go away received I0512 10:31:16.632431 6 log.go:172] (0xc000b93080) (0xc00268e3c0) Stream removed, broadcasting: 1 I0512 10:31:16.632447 6 log.go:172] (0xc000b93080) (0xc0014ccf00) Stream removed, broadcasting: 3 I0512 10:31:16.632454 6 log.go:172] (0xc000b93080) (0xc0014ccfa0) Stream removed, broadcasting: 5 May 12 10:31:16.632: INFO: Exec stderr: "" May 12 10:31:16.632: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.632: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.655621 6 log.go:172] (0xc001dd8e70) (0xc0023003c0) Create stream I0512 10:31:16.655642 6 log.go:172] (0xc001dd8e70) (0xc0023003c0) Stream added, broadcasting: 1 I0512 10:31:16.657664 6 log.go:172] (0xc001dd8e70) Reply frame received for 1 I0512 10:31:16.657702 6 log.go:172] (0xc001dd8e70) (0xc00268e500) Create stream I0512 10:31:16.657711 6 log.go:172] (0xc001dd8e70) (0xc00268e500) Stream added, broadcasting: 3 I0512 10:31:16.658419 6 log.go:172] 
(0xc001dd8e70) Reply frame received for 3 I0512 10:31:16.658448 6 log.go:172] (0xc001dd8e70) (0xc00268e640) Create stream I0512 10:31:16.658460 6 log.go:172] (0xc001dd8e70) (0xc00268e640) Stream added, broadcasting: 5 I0512 10:31:16.659161 6 log.go:172] (0xc001dd8e70) Reply frame received for 5 I0512 10:31:16.709815 6 log.go:172] (0xc001dd8e70) Data frame received for 5 I0512 10:31:16.709860 6 log.go:172] (0xc00268e640) (5) Data frame handling I0512 10:31:16.709885 6 log.go:172] (0xc001dd8e70) Data frame received for 3 I0512 10:31:16.709898 6 log.go:172] (0xc00268e500) (3) Data frame handling I0512 10:31:16.709918 6 log.go:172] (0xc00268e500) (3) Data frame sent I0512 10:31:16.709924 6 log.go:172] (0xc001dd8e70) Data frame received for 3 I0512 10:31:16.709930 6 log.go:172] (0xc00268e500) (3) Data frame handling I0512 10:31:16.711312 6 log.go:172] (0xc001dd8e70) Data frame received for 1 I0512 10:31:16.711355 6 log.go:172] (0xc0023003c0) (1) Data frame handling I0512 10:31:16.711379 6 log.go:172] (0xc0023003c0) (1) Data frame sent I0512 10:31:16.711404 6 log.go:172] (0xc001dd8e70) (0xc0023003c0) Stream removed, broadcasting: 1 I0512 10:31:16.711431 6 log.go:172] (0xc001dd8e70) Go away received I0512 10:31:16.711603 6 log.go:172] (0xc001dd8e70) (0xc0023003c0) Stream removed, broadcasting: 1 I0512 10:31:16.711628 6 log.go:172] (0xc001dd8e70) (0xc00268e500) Stream removed, broadcasting: 3 I0512 10:31:16.711638 6 log.go:172] (0xc001dd8e70) (0xc00268e640) Stream removed, broadcasting: 5 May 12 10:31:16.711: INFO: Exec stderr: "" May 12 10:31:16.711: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.711: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.766161 6 log.go:172] (0xc001e7d3f0) (0xc0014cd400) Create stream I0512 10:31:16.766195 6 log.go:172] (0xc001e7d3f0) (0xc0014cd400) Stream added, broadcasting: 1 I0512 10:31:16.769345 6 log.go:172] (0xc001e7d3f0) Reply frame received for 1 I0512 10:31:16.769389 6 log.go:172] (0xc001e7d3f0) (0xc001f1fb80) Create stream I0512 10:31:16.769401 6 log.go:172] (0xc001e7d3f0) (0xc001f1fb80) Stream added, broadcasting: 3 I0512 10:31:16.770069 6 log.go:172] (0xc001e7d3f0) Reply frame received for 3 I0512 10:31:16.770098 6 log.go:172] (0xc001e7d3f0) (0xc001f1fc20) Create stream I0512 10:31:16.770107 6 log.go:172] (0xc001e7d3f0) (0xc001f1fc20) Stream added, broadcasting: 5 I0512 10:31:16.770777 6 log.go:172] (0xc001e7d3f0) Reply frame received for 5 I0512 10:31:16.854901 6 log.go:172] (0xc001e7d3f0) Data frame received for 5 I0512 10:31:16.854936 6 log.go:172] (0xc001e7d3f0) Data frame received for 3 I0512 10:31:16.854973 6 log.go:172] (0xc001f1fb80) (3) Data frame handling I0512 10:31:16.854986 6 log.go:172] (0xc001f1fb80) (3) Data frame sent I0512 10:31:16.854995 6 log.go:172] (0xc001e7d3f0) Data frame received for 3 I0512 10:31:16.855003 6 log.go:172] (0xc001f1fb80) (3) Data frame handling I0512 10:31:16.855028 6 log.go:172] (0xc001f1fc20) (5) Data frame handling I0512 10:31:16.855901 6 log.go:172] (0xc001e7d3f0) Data frame received for 1 I0512 10:31:16.855967 6 log.go:172] (0xc0014cd400) (1) Data frame handling I0512 10:31:16.856033 6 log.go:172] (0xc0014cd400) (1) Data frame sent I0512 10:31:16.856062 6 log.go:172] (0xc001e7d3f0) (0xc0014cd400) Stream removed, broadcasting: 1 I0512 10:31:16.856107 6 log.go:172] (0xc001e7d3f0) Go away received I0512 10:31:16.856172 6 log.go:172] 
(0xc001e7d3f0) (0xc0014cd400) Stream removed, broadcasting: 1 I0512 10:31:16.856191 6 log.go:172] (0xc001e7d3f0) (0xc001f1fb80) Stream removed, broadcasting: 3 I0512 10:31:16.856199 6 log.go:172] (0xc001e7d3f0) (0xc001f1fc20) Stream removed, broadcasting: 5 May 12 10:31:16.856: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 10:31:16.856: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.856: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.882079 6 log.go:172] (0xc001dd9ef0) (0xc002300780) Create stream I0512 10:31:16.882121 6 log.go:172] (0xc001dd9ef0) (0xc002300780) Stream added, broadcasting: 1 I0512 10:31:16.884281 6 log.go:172] (0xc001dd9ef0) Reply frame received for 1 I0512 10:31:16.884309 6 log.go:172] (0xc001dd9ef0) (0xc001290e60) Create stream I0512 10:31:16.884318 6 log.go:172] (0xc001dd9ef0) (0xc001290e60) Stream added, broadcasting: 3 I0512 10:31:16.885067 6 log.go:172] (0xc001dd9ef0) Reply frame received for 3 I0512 10:31:16.885092 6 log.go:172] (0xc001dd9ef0) (0xc001291180) Create stream I0512 10:31:16.885102 6 log.go:172] (0xc001dd9ef0) (0xc001291180) Stream added, broadcasting: 5 I0512 10:31:16.885996 6 log.go:172] (0xc001dd9ef0) Reply frame received for 5 I0512 10:31:16.945713 6 log.go:172] (0xc001dd9ef0) Data frame received for 5 I0512 10:31:16.945762 6 log.go:172] (0xc001dd9ef0) Data frame received for 3 I0512 10:31:16.945807 6 log.go:172] (0xc001290e60) (3) Data frame handling I0512 10:31:16.945830 6 log.go:172] (0xc001291180) (5) Data frame handling I0512 10:31:16.945864 6 log.go:172] (0xc001290e60) (3) Data frame sent I0512 10:31:16.945879 6 log.go:172] (0xc001dd9ef0) Data frame received for 3 I0512 10:31:16.945889 6 log.go:172] (0xc001290e60) (3) Data frame handling I0512 10:31:16.946886 6 log.go:172] (0xc001dd9ef0) Data frame received for 1 I0512 10:31:16.946916 6 log.go:172] (0xc002300780) (1) Data frame handling I0512 10:31:16.946950 6 log.go:172] (0xc002300780) (1) Data frame sent I0512 10:31:16.946988 6 log.go:172] (0xc001dd9ef0) (0xc002300780) Stream removed, broadcasting: 1 I0512 10:31:16.947019 6 log.go:172] (0xc001dd9ef0) Go away received I0512 10:31:16.947112 6 log.go:172] (0xc001dd9ef0) (0xc002300780) Stream removed, broadcasting: 1 I0512 10:31:16.947137 6 log.go:172] (0xc001dd9ef0) (0xc001290e60) Stream removed, broadcasting: 3 I0512 10:31:16.947170 6 log.go:172] (0xc001dd9ef0) (0xc001291180) Stream removed, broadcasting: 5 May 12 10:31:16.947: INFO: Exec stderr: "" May 12 10:31:16.947: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:16.947: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:16.978567 6 log.go:172] (0xc002353810) (0xc001080320) Create stream I0512 10:31:16.978597 6 log.go:172] (0xc002353810) (0xc001080320) Stream added, broadcasting: 1 I0512 10:31:16.981004 6 log.go:172] (0xc002353810) Reply frame received for 1 I0512 10:31:16.981031 6 log.go:172] (0xc002353810) (0xc0010803c0) Create stream I0512 10:31:16.981043 6 log.go:172] (0xc002353810) (0xc0010803c0) Stream added, broadcasting: 3 I0512 10:31:16.982120 6 log.go:172] (0xc002353810) Reply frame received for 3 I0512 10:31:16.982151 6 log.go:172] (0xc002353810) 
(0xc0014cd4a0) Create stream I0512 10:31:16.982167 6 log.go:172] (0xc002353810) (0xc0014cd4a0) Stream added, broadcasting: 5 I0512 10:31:16.982906 6 log.go:172] (0xc002353810) Reply frame received for 5 I0512 10:31:17.037726 6 log.go:172] (0xc002353810) Data frame received for 5 I0512 10:31:17.037746 6 log.go:172] (0xc0014cd4a0) (5) Data frame handling I0512 10:31:17.037978 6 log.go:172] (0xc002353810) Data frame received for 3 I0512 10:31:17.038043 6 log.go:172] (0xc0010803c0) (3) Data frame handling I0512 10:31:17.038085 6 log.go:172] (0xc0010803c0) (3) Data frame sent I0512 10:31:17.038099 6 log.go:172] (0xc002353810) Data frame received for 3 I0512 10:31:17.038107 6 log.go:172] (0xc0010803c0) (3) Data frame handling I0512 10:31:17.038992 6 log.go:172] (0xc002353810) Data frame received for 1 I0512 10:31:17.039015 6 log.go:172] (0xc001080320) (1) Data frame handling I0512 10:31:17.039026 6 log.go:172] (0xc001080320) (1) Data frame sent I0512 10:31:17.039065 6 log.go:172] (0xc002353810) (0xc001080320) Stream removed, broadcasting: 1 I0512 10:31:17.039093 6 log.go:172] (0xc002353810) Go away received I0512 10:31:17.039133 6 log.go:172] (0xc002353810) (0xc001080320) Stream removed, broadcasting: 1 I0512 10:31:17.039149 6 log.go:172] (0xc002353810) (0xc0010803c0) Stream removed, broadcasting: 3 I0512 10:31:17.039158 6 log.go:172] (0xc002353810) (0xc0014cd4a0) Stream removed, broadcasting: 5 May 12 10:31:17.039: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 12 10:31:17.039: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:17.039: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:17.066944 6 log.go:172] (0xc003030dc0) (0xc002300a00) Create stream I0512 10:31:17.066972 6 log.go:172] (0xc003030dc0) (0xc002300a00) Stream added, broadcasting: 1 I0512 10:31:17.069338 6 log.go:172] (0xc003030dc0) Reply frame received for 1 I0512 10:31:17.069380 6 log.go:172] (0xc003030dc0) (0xc001291220) Create stream I0512 10:31:17.069391 6 log.go:172] (0xc003030dc0) (0xc001291220) Stream added, broadcasting: 3 I0512 10:31:17.070099 6 log.go:172] (0xc003030dc0) Reply frame received for 3 I0512 10:31:17.070136 6 log.go:172] (0xc003030dc0) (0xc0012912c0) Create stream I0512 10:31:17.070148 6 log.go:172] (0xc003030dc0) (0xc0012912c0) Stream added, broadcasting: 5 I0512 10:31:17.070781 6 log.go:172] (0xc003030dc0) Reply frame received for 5 I0512 10:31:17.140169 6 log.go:172] (0xc003030dc0) Data frame received for 5 I0512 10:31:17.140205 6 log.go:172] (0xc0012912c0) (5) Data frame handling I0512 10:31:17.140238 6 log.go:172] (0xc003030dc0) Data frame received for 3 I0512 10:31:17.140283 6 log.go:172] (0xc001291220) (3) Data frame handling I0512 10:31:17.140311 6 log.go:172] (0xc001291220) (3) Data frame sent I0512 10:31:17.140323 6 log.go:172] (0xc003030dc0) Data frame received for 3 I0512 10:31:17.140332 6 log.go:172] (0xc001291220) (3) Data frame handling I0512 10:31:17.141371 6 log.go:172] (0xc003030dc0) Data frame received for 1 I0512 10:31:17.141389 6 log.go:172] (0xc002300a00) (1) Data frame handling I0512 10:31:17.141397 6 log.go:172] (0xc002300a00) (1) Data frame sent I0512 10:31:17.141518 6 log.go:172] (0xc003030dc0) (0xc002300a00) Stream removed, broadcasting: 1 I0512 10:31:17.141557 6 log.go:172] (0xc003030dc0) Go away received I0512 10:31:17.141608 6 
log.go:172] (0xc003030dc0) (0xc002300a00) Stream removed, broadcasting: 1 I0512 10:31:17.141623 6 log.go:172] (0xc003030dc0) (0xc001291220) Stream removed, broadcasting: 3 I0512 10:31:17.141636 6 log.go:172] (0xc003030dc0) (0xc0012912c0) Stream removed, broadcasting: 5 May 12 10:31:17.141: INFO: Exec stderr: "" May 12 10:31:17.141: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:17.141: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:17.170226 6 log.go:172] (0xc00348a8f0) (0xc0014cdc20) Create stream I0512 10:31:17.170257 6 log.go:172] (0xc00348a8f0) (0xc0014cdc20) Stream added, broadcasting: 1 I0512 10:31:17.172622 6 log.go:172] (0xc00348a8f0) Reply frame received for 1 I0512 10:31:17.172660 6 log.go:172] (0xc00348a8f0) (0xc002300aa0) Create stream I0512 10:31:17.172672 6 log.go:172] (0xc00348a8f0) (0xc002300aa0) Stream added, broadcasting: 3 I0512 10:31:17.173813 6 log.go:172] (0xc00348a8f0) Reply frame received for 3 I0512 10:31:17.173844 6 log.go:172] (0xc00348a8f0) (0xc001080500) Create stream I0512 10:31:17.173860 6 log.go:172] (0xc00348a8f0) (0xc001080500) Stream added, broadcasting: 5 I0512 10:31:17.174740 6 log.go:172] (0xc00348a8f0) Reply frame received for 5 I0512 10:31:17.239479 6 log.go:172] (0xc00348a8f0) Data frame received for 5 I0512 10:31:17.239503 6 log.go:172] (0xc001080500) (5) Data frame handling I0512 10:31:17.239528 6 log.go:172] (0xc00348a8f0) Data frame received for 3 I0512 10:31:17.239540 6 log.go:172] (0xc002300aa0) (3) Data frame handling I0512 10:31:17.239562 6 log.go:172] (0xc002300aa0) (3) Data frame sent I0512 10:31:17.239575 6 log.go:172] (0xc00348a8f0) Data frame received for 3 I0512 10:31:17.239586 6 log.go:172] (0xc002300aa0) (3) Data frame handling I0512 10:31:17.240749 6 log.go:172] (0xc00348a8f0) Data frame received for 1 I0512 10:31:17.240773 6 log.go:172] (0xc0014cdc20) (1) Data frame handling I0512 10:31:17.240799 6 log.go:172] (0xc0014cdc20) (1) Data frame sent I0512 10:31:17.240820 6 log.go:172] (0xc00348a8f0) (0xc0014cdc20) Stream removed, broadcasting: 1 I0512 10:31:17.240862 6 log.go:172] (0xc00348a8f0) Go away received I0512 10:31:17.241339 6 log.go:172] (0xc00348a8f0) (0xc0014cdc20) Stream removed, broadcasting: 1 I0512 10:31:17.241369 6 log.go:172] (0xc00348a8f0) (0xc002300aa0) Stream removed, broadcasting: 3 I0512 10:31:17.241386 6 log.go:172] (0xc00348a8f0) (0xc001080500) Stream removed, broadcasting: 5 May 12 10:31:17.241: INFO: Exec stderr: "" May 12 10:31:17.241: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:17.241: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:17.270145 6 log.go:172] (0xc003031970) (0xc002300be0) Create stream I0512 10:31:17.270175 6 log.go:172] (0xc003031970) (0xc002300be0) Stream added, broadcasting: 1 I0512 10:31:17.278109 6 log.go:172] (0xc003031970) Reply frame received for 1 I0512 10:31:17.278157 6 log.go:172] (0xc003031970) (0xc002300c80) Create stream I0512 10:31:17.278171 6 log.go:172] (0xc003031970) (0xc002300c80) Stream added, broadcasting: 3 I0512 10:31:17.279006 6 log.go:172] (0xc003031970) Reply frame received for 3 I0512 10:31:17.279060 6 log.go:172] (0xc003031970) (0xc001080640) Create stream I0512 10:31:17.279071 6 log.go:172] 
(0xc003031970) (0xc001080640) Stream added, broadcasting: 5 I0512 10:31:17.280444 6 log.go:172] (0xc003031970) Reply frame received for 5 I0512 10:31:17.332566 6 log.go:172] (0xc003031970) Data frame received for 5 I0512 10:31:17.332604 6 log.go:172] (0xc001080640) (5) Data frame handling I0512 10:31:17.332646 6 log.go:172] (0xc003031970) Data frame received for 3 I0512 10:31:17.332666 6 log.go:172] (0xc002300c80) (3) Data frame handling I0512 10:31:17.332682 6 log.go:172] (0xc002300c80) (3) Data frame sent I0512 10:31:17.332698 6 log.go:172] (0xc003031970) Data frame received for 3 I0512 10:31:17.332721 6 log.go:172] (0xc002300c80) (3) Data frame handling I0512 10:31:17.334852 6 log.go:172] (0xc003031970) Data frame received for 1 I0512 10:31:17.334882 6 log.go:172] (0xc002300be0) (1) Data frame handling I0512 10:31:17.334894 6 log.go:172] (0xc002300be0) (1) Data frame sent I0512 10:31:17.334905 6 log.go:172] (0xc003031970) (0xc002300be0) Stream removed, broadcasting: 1 I0512 10:31:17.335000 6 log.go:172] (0xc003031970) (0xc002300be0) Stream removed, broadcasting: 1 I0512 10:31:17.335017 6 log.go:172] (0xc003031970) (0xc002300c80) Stream removed, broadcasting: 3 I0512 10:31:17.335151 6 log.go:172] (0xc003031970) (0xc001080640) Stream removed, broadcasting: 5 May 12 10:31:17.335: INFO: Exec stderr: "" May 12 10:31:17.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1696 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 10:31:17.335: INFO: >>> kubeConfig: /root/.kube/config I0512 10:31:17.337023 6 log.go:172] (0xc003031970) Go away received I0512 10:31:17.362045 6 log.go:172] (0xc002b0d3f0) (0xc001080dc0) Create stream I0512 10:31:17.362082 6 log.go:172] (0xc002b0d3f0) (0xc001080dc0) Stream added, broadcasting: 1 I0512 10:31:17.364226 6 log.go:172] (0xc002b0d3f0) Reply frame received for 1 I0512 10:31:17.364257 6 log.go:172] (0xc002b0d3f0) (0xc0014cdea0) Create stream I0512 10:31:17.364268 6 log.go:172] (0xc002b0d3f0) (0xc0014cdea0) Stream added, broadcasting: 3 I0512 10:31:17.365267 6 log.go:172] (0xc002b0d3f0) Reply frame received for 3 I0512 10:31:17.365303 6 log.go:172] (0xc002b0d3f0) (0xc001abc000) Create stream I0512 10:31:17.365317 6 log.go:172] (0xc002b0d3f0) (0xc001abc000) Stream added, broadcasting: 5 I0512 10:31:17.366414 6 log.go:172] (0xc002b0d3f0) Reply frame received for 5 I0512 10:31:17.486278 6 log.go:172] (0xc002b0d3f0) Data frame received for 3 I0512 10:31:17.486318 6 log.go:172] (0xc0014cdea0) (3) Data frame handling I0512 10:31:17.486346 6 log.go:172] (0xc0014cdea0) (3) Data frame sent I0512 10:31:17.486362 6 log.go:172] (0xc002b0d3f0) Data frame received for 3 I0512 10:31:17.486374 6 log.go:172] (0xc0014cdea0) (3) Data frame handling I0512 10:31:17.486408 6 log.go:172] (0xc002b0d3f0) Data frame received for 5 I0512 10:31:17.486445 6 log.go:172] (0xc001abc000) (5) Data frame handling I0512 10:31:17.487613 6 log.go:172] (0xc002b0d3f0) Data frame received for 1 I0512 10:31:17.487638 6 log.go:172] (0xc001080dc0) (1) Data frame handling I0512 10:31:17.487651 6 log.go:172] (0xc001080dc0) (1) Data frame sent I0512 10:31:17.487665 6 log.go:172] (0xc002b0d3f0) (0xc001080dc0) Stream removed, broadcasting: 1 I0512 10:31:17.487680 6 log.go:172] (0xc002b0d3f0) Go away received I0512 10:31:17.487803 6 log.go:172] (0xc002b0d3f0) (0xc001080dc0) Stream removed, broadcasting: 1 I0512 10:31:17.487820 6 log.go:172] (0xc002b0d3f0) (0xc0014cdea0) Stream removed, 
broadcasting: 3 I0512 10:31:17.487830 6 log.go:172] (0xc002b0d3f0) (0xc001abc000) Stream removed, broadcasting: 5 May 12 10:31:17.487: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:31:17.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1696" for this suite. May 12 10:32:15.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:32:16.526: INFO: namespace e2e-kubelet-etc-hosts-1696 deletion completed in 59.03492302s • [SLOW TEST:83.024 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:32:16.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:32:17.622: INFO: Create a RollingUpdate DaemonSet May 12 10:32:17.625: INFO: Check that daemon pods launch on every node of the cluster May 12 10:32:17.663: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:18.219: INFO: Number of nodes with available pods: 0 May 12 10:32:18.219: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:19.238: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:19.788: INFO: Number of nodes with available pods: 0 May 12 10:32:19.788: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:21.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:22.292: INFO: Number of nodes with available pods: 0 May 12 10:32:22.292: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:23.921: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:24.268: INFO: Number of nodes with available pods: 0 May 12 10:32:24.268: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:25.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:25.503: INFO: Number of nodes with available pods: 0 May 12 10:32:25.503: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:26.519: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:26.522: INFO: Number of nodes with available pods: 0 May 12 10:32:26.522: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:27.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:27.226: INFO: Number of nodes with available pods: 0 May 12 10:32:27.226: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:28.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:28.282: INFO: Number of nodes with available pods: 0 May 12 10:32:28.282: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:29.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:29.226: INFO: Number of nodes with available pods: 0 May 12 10:32:29.226: INFO: Node iruya-worker is running more than one daemon pod May 12 10:32:30.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:30.227: INFO: Number of nodes with available pods: 2 May 12 10:32:30.227: INFO: Number of running nodes: 2, number of available pods: 2 May 12 10:32:30.227: INFO: Update the DaemonSet to trigger a rollout May 12 10:32:30.233: INFO: Updating DaemonSet daemon-set May 12 10:32:42.267: INFO: Roll back the DaemonSet before rollout is complete May 12 10:32:42.273: INFO: Updating DaemonSet daemon-set May 12 10:32:42.273: INFO: Make sure DaemonSet rollback is complete May 12 10:32:42.678: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:42.678: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:42.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:43.974: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:43.974: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:43.987: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:44.912: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:44.912: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:44.916: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:46.207: INFO: Wrong image for pod: daemon-set-lnjnt. 
Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:46.207: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:46.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:46.961: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:46.961: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:46.965: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:48.453: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:48.453: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:48.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:48.876: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:48.876: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:48.879: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:50.143: INFO: Wrong image for pod: daemon-set-lnjnt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 12 10:32:50.143: INFO: Pod daemon-set-lnjnt is not available May 12 10:32:50.831: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:51.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:32:52.875: INFO: Pod daemon-set-f9sj9 is not available May 12 10:32:52.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7934, will wait for the garbage collector to delete the pods May 12 10:32:52.939: INFO: Deleting DaemonSet.extensions daemon-set took: 5.808704ms May 12 10:32:54.639: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.700199301s May 12 10:33:02.561: INFO: Number of nodes with available pods: 0 May 12 10:33:02.561: INFO: Number of running nodes: 0, number of available pods: 0 May 12 10:33:02.566: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7934/daemonsets","resourceVersion":"10455674"},"items":null} May 12 10:33:02.569: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7934/pods","resourceVersion":"10455674"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:33:02.578: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7934" for this suite.
May 12 10:33:10.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 10:33:10.871: INFO: namespace daemonsets-7934 deletion completed in 8.290896034s

• [SLOW TEST:54.345 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 10:33:10.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-509b72f6-bbf5-459f-ad5b-ac088119bad3
STEP: Creating a pod to test consume secrets
May 12 10:33:10.958: INFO: Waiting up to 5m0s for pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b" in namespace "secrets-8424" to be "success or failure"
May 12 10:33:10.969: INFO: Pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.05306ms
May 12 10:33:12.973: INFO: Pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01508364s
May 12 10:33:14.977: INFO: Pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018654204s
May 12 10:33:16.980: INFO: Pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021531895s
STEP: Saw pod success
May 12 10:33:16.980: INFO: Pod "pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b" satisfied condition "success or failure"
May 12 10:33:16.982: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b container secret-env-test:
STEP: delete the pod
May 12 10:33:17.374: INFO: Waiting for pod pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b to disappear
May 12 10:33:17.543: INFO: Pod pod-secrets-ef98dc44-c5e1-4fb8-ba63-88977230a02b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 12 10:33:17.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8424" for this suite.
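The [sig-api-machinery] Secrets test above wires a secret key into a container environment variable and asserts that the pod sees the value. A hand-written equivalent is sketched below, with illustrative names and an assumed key/value pair (this run generated secret-test-509b72f6-...):

---
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-example             # stand-in for the generated secret name
type: Opaque
stringData:
  data-1: value-1                      # illustrative key and value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test              # container name as it appears in the log above
    image: busybox:1.31                # assumed helper image
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-example
          key: data-1

The "Trying to get logs" step in the log is how the framework reads the echoed value back out of the completed container to compare it against the secret.
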
May 12 10:33:25.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:33:25.642: INFO: namespace secrets-8424 deletion completed in 8.096434984s • [SLOW TEST:14.770 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:33:25.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:33:25.998: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 12 10:33:26.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:26.076: INFO: Number of nodes with available pods: 0 May 12 10:33:26.076: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:27.080: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:27.082: INFO: Number of nodes with available pods: 0 May 12 10:33:27.082: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:28.113: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:28.173: INFO: Number of nodes with available pods: 0 May 12 10:33:28.173: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:29.155: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:29.375: INFO: Number of nodes with available pods: 0 May 12 10:33:29.375: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:30.172: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:30.175: INFO: Number of nodes with available pods: 0 May 12 10:33:30.175: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:31.114: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
12 10:33:31.249: INFO: Number of nodes with available pods: 0 May 12 10:33:31.250: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:32.232: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:32.236: INFO: Number of nodes with available pods: 0 May 12 10:33:32.236: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:33.102: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:33.258: INFO: Number of nodes with available pods: 0 May 12 10:33:33.258: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:34.154: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:34.156: INFO: Number of nodes with available pods: 0 May 12 10:33:34.156: INFO: Node iruya-worker is running more than one daemon pod May 12 10:33:35.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:35.645: INFO: Number of nodes with available pods: 2 May 12 10:33:35.646: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 10:33:37.944: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:37.944: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:38.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:39.927: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:39.927: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:39.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:41.131: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:41.131: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:41.652: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:41.974: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:41.974: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
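For readers reconstructing the rolling-update scenario being exercised here, the following is a minimal client-go sketch of the update-and-verify loop that the "Wrong image for pod" entries below correspond to. It is illustrative only, not the e2e framework's code: it assumes a pre-0.18 client-go (method signatures without context.Context, matching this v1.15 cluster), and the kubeconfig path, namespace, and "daemonset-name" label selector are assumptions.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	const ns, name = "daemonsets-5796", "daemon-set"
	const wantImage = "gcr.io/kubernetes-e2e-test-images/redis:1.0"

	// Swap the pod-template image; with updateStrategy RollingUpdate the
	// DaemonSet controller then replaces daemon pods node by node.
	ds, err := clientset.AppsV1().DaemonSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = wantImage
	if _, err = clientset.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		panic(err)
	}

	// Poll pod specs until none reports the old image, mirroring the
	// "Wrong image for pod" log entries in this run.
	for {
		pods, err := clientset.CoreV1().Pods(ns).List(metav1.ListOptions{
			LabelSelector: "daemonset-name=" + name, // assumed label
		})
		if err != nil {
			panic(err)
		}
		stale := 0
		for _, p := range pods.Items {
			if got := p.Spec.Containers[0].Image; got != wantImage {
				fmt.Printf("Wrong image for pod: %s. Expected: %s, got: %s.\n", p.Name, wantImage, got)
				stale++
			}
		}
		if stale == 0 {
			return
		}
		time.Sleep(time.Second)
	}
}

The long tail of polling entries that follows is this loop observed from the outside: old pods are torn down and recreated one at a time, so the check keeps failing until the last replacement pod becomes available.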
May 12 10:33:41.978: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:42.927: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:42.927: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:42.932: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:44.005: INFO: Wrong image for pod: daemon-set-kktd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:44.005: INFO: Pod daemon-set-kktd6 is not available May 12 10:33:44.005: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:44.009: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:45.186: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:45.186: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:45.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:46.368: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:46.368: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:46.796: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:47.070: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:47.070: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:47.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:47.934: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:47.934: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:48.258: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:49.532: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:49.532: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:49.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:50.527: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:50.527: INFO: Wrong image for pod: daemon-set-mpwd6. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:50.530: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:51.669: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:51.669: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:51.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:52.299: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:52.299: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:52.302: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:53.214: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:53.214: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:53.216: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:53.968: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:53.968: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:53.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:55.185: INFO: Pod daemon-set-fvh75 is not available May 12 10:33:55.185: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:56.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:57.226: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:57.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:57.927: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:57.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:58.927: INFO: Wrong image for pod: daemon-set-mpwd6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:58.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:33:59.927: INFO: Wrong image for pod: daemon-set-mpwd6. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 10:33:59.927: INFO: Pod daemon-set-mpwd6 is not available May 12 10:33:59.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:01.376: INFO: Pod daemon-set-f5f48 is not available May 12 10:34:01.380: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 12 10:34:01.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:01.867: INFO: Number of nodes with available pods: 1 May 12 10:34:01.867: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:02.873: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:02.877: INFO: Number of nodes with available pods: 1 May 12 10:34:02.877: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:04.157: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:04.161: INFO: Number of nodes with available pods: 1 May 12 10:34:04.161: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:04.916: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:04.919: INFO: Number of nodes with available pods: 1 May 12 10:34:04.919: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:05.869: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:05.871: INFO: Number of nodes with available pods: 1 May 12 10:34:05.871: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:06.986: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:06.989: INFO: Number of nodes with available pods: 1 May 12 10:34:06.989: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:07.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:07.874: INFO: Number of nodes with available pods: 1 May 12 10:34:07.874: INFO: Node iruya-worker2 is running more than one daemon pod May 12 10:34:08.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 10:34:08.949: INFO: Number of nodes with available pods: 2 May 12 10:34:08.949: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5796, will wait for the garbage collector to delete the pods May 12 10:34:09.016: INFO: Deleting DaemonSet.extensions daemon-set took: 6.214007ms May 12 10:34:09.517: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.515176ms May 12 10:34:24.466: INFO: Number of nodes with available pods: 0 May 12 10:34:24.466: INFO: Number of running nodes: 0, number of available pods: 0 May 12 10:34:24.468: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5796/daemonsets","resourceVersion":"10455954"},"items":null} May 12 10:34:24.470: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5796/pods","resourceVersion":"10455954"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:34:24.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5796" for this suite. May 12 10:34:46.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:34:47.202: INFO: namespace daemonsets-5796 deletion completed in 22.721786142s • [SLOW TEST:81.559 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:34:47.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 12 10:34:47.762: INFO: Waiting up to 5m0s for pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728" in namespace "containers-7898" to be "success or failure" May 12 10:34:48.190: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Pending", Reason="", readiness=false. Elapsed: 428.222713ms May 12 10:34:50.324: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562156333s May 12 10:34:52.329: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566930653s May 12 10:34:54.951: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.188618093s May 12 10:34:56.954: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Pending", Reason="", readiness=false. Elapsed: 9.192127281s May 12 10:34:59.233: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.470745658s STEP: Saw pod success May 12 10:34:59.233: INFO: Pod "client-containers-23c3041b-d93f-4e40-8631-4a454885b728" satisfied condition "success or failure" May 12 10:34:59.765: INFO: Trying to get logs from node iruya-worker pod client-containers-23c3041b-d93f-4e40-8631-4a454885b728 container test-container: STEP: delete the pod May 12 10:35:01.314: INFO: Waiting for pod client-containers-23c3041b-d93f-4e40-8631-4a454885b728 to disappear May 12 10:35:01.478: INFO: Pod client-containers-23c3041b-d93f-4e40-8631-4a454885b728 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:35:01.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7898" for this suite. May 12 10:35:07.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:35:07.570: INFO: namespace containers-7898 deletion completed in 6.08780616s • [SLOW TEST:20.368 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:35:07.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:35:08.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6" in namespace "downward-api-1675" to be "success or failure" May 12 10:35:08.298: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.134394ms May 12 10:35:10.994: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.717606629s May 12 10:35:12.997: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720446829s May 12 10:35:15.000: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.723401649s May 12 10:35:17.054: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.777929906s STEP: Saw pod success May 12 10:35:17.054: INFO: Pod "downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6" satisfied condition "success or failure" May 12 10:35:17.061: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6 container client-container: STEP: delete the pod May 12 10:35:17.571: INFO: Waiting for pod downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6 to disappear May 12 10:35:17.641: INFO: Pod downwardapi-volume-06b82530-efbb-4125-8274-77ec4277ccf6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:35:17.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1675" for this suite. May 12 10:35:26.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:35:26.990: INFO: namespace downward-api-1675 deletion completed in 9.34450085s • [SLOW TEST:19.420 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:35:26.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
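The "pod with lifecycle hook" created in the next step carries a PostStart httpGet handler aimed at the handler container just mentioned. A minimal sketch of that pod's shape follows, assuming the v1.15-era k8s.io/api (where the hook type is still corev1.Handler; it was renamed LifecycleHandler in later releases); the path, port, image, and handler IP are assumptions, not values recorded in this run.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPPod builds a pod shaped like pod-with-poststart-http-hook:
// the kubelet fires the GET against the handler pod immediately after the
// container starts, which is what the "check poststart hook" step verifies.
func postStartHTTPPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "docker.io/library/nginx:1.14-alpine", // assumed image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // assumed path
							Host: handlerIP,             // handler pod IP (assumed)
							Port: intstr.FromInt(8080),  // assumed handler port
						},
					},
				},
			}},
		},
	}
}

func main() { _ = postStartHTTPPod("10.244.1.5") }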
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 10:35:44.893: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:35:45.150: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:35:47.150: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:35:47.348: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:35:49.150: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:35:49.327: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:35:51.150: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:35:51.455: INFO: Pod pod-with-poststart-http-hook still exists May 12 10:35:53.150: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 10:35:53.154: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:35:53.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7834" for this suite. May 12 10:36:19.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:19.244: INFO: namespace container-lifecycle-hook-7834 deletion completed in 26.085950753s • [SLOW TEST:52.254 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:36:19.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-2972 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2972 to expose endpoints map[] May 12 10:36:19.416: INFO: Get endpoints failed (23.27823ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 12 10:36:20.419: INFO: successfully validated that service multi-endpoint-test in namespace services-2972 exposes endpoints map[] (1.026098618s elapsed) STEP: Creating pod pod1 in namespace services-2972 STEP: waiting up to 3m0s for service multi-endpoint-test in 
namespace services-2972 to expose endpoints map[pod1:[100]] May 12 10:36:25.715: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.29115543s elapsed, will retry) May 12 10:36:27.762: INFO: successfully validated that service multi-endpoint-test in namespace services-2972 exposes endpoints map[pod1:[100]] (7.337912756s elapsed) STEP: Creating pod pod2 in namespace services-2972 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2972 to expose endpoints map[pod1:[100] pod2:[101]] May 12 10:36:32.511: INFO: Unexpected endpoints: found map[b8f8981b-d05a-49bc-803c-2817bda72d31:[100]], expected map[pod1:[100] pod2:[101]] (4.746551673s elapsed, will retry) May 12 10:36:33.516: INFO: successfully validated that service multi-endpoint-test in namespace services-2972 exposes endpoints map[pod1:[100] pod2:[101]] (5.752139134s elapsed) STEP: Deleting pod pod1 in namespace services-2972 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2972 to expose endpoints map[pod2:[101]] May 12 10:36:35.236: INFO: successfully validated that service multi-endpoint-test in namespace services-2972 exposes endpoints map[pod2:[101]] (1.716842343s elapsed) STEP: Deleting pod pod2 in namespace services-2972 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2972 to expose endpoints map[] May 12 10:36:36.344: INFO: successfully validated that service multi-endpoint-test in namespace services-2972 exposes endpoints map[] (1.077000232s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:36:36.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2972" for this suite. 
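The endpoint maps validated above (map[pod1:[100]], map[pod1:[100] pod2:[101]], and so on) are the Endpoints object of a two-port service, keyed by backing pod and container port. A sketch of that service and the readback follows, again assuming pre-0.18 client-go signatures; the selector labels and service port numbers are assumptions chosen to match the container ports 100 and 101 seen in the maps.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	const ns = "services-2972"

	// Two named ports: pod1 serves container port 100, pod2 serves 101,
	// so each pod shows up under exactly one target port in the endpoints.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"foo": "bar"}, // assumed selector
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	if _, err := clientset.CoreV1().Services(ns).Create(svc); err != nil {
		panic(err)
	}

	// The Endpoints object is what the "expose endpoints map[...]" waits
	// above are repeatedly reading until it matches the expected map.
	ep, err := clientset.CoreV1().Endpoints(ns).Get("multi-endpoint-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ss := range ep.Subsets {
		for _, p := range ss.Ports {
			fmt.Printf("endpoint port %s -> %d\n", p.Name, p.Port)
		}
	}
}

Naming both ports matters: a multi-port service must name its ports, and the endpoints controller groups addresses per named port, which is why deleting pod1 leaves a map[pod2:[101]] rather than an empty map.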
May 12 10:36:46.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:47.216: INFO: namespace services-2972 deletion completed in 10.381226985s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:27.972 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:36:47.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-6c016e08-70f7-441e-a7a2-59446813a5e2 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:36:47.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6440" for this suite. 
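Note that this test passes without ever creating a pod: the failure is API-server-side validation at Create time. A minimal sketch of the rejected request, under the same pre-0.18 client-go assumption (secret name abbreviated here; the generated suffix above is per-run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: not a valid data key
		},
	}
	// The apiserver rejects the object outright, so Create must return an
	// error; that rejection is the behaviour the test asserts.
	_, err = clientset.CoreV1().Secrets("secrets-6440").Create(secret)
	if err == nil {
		panic("expected validation error for empty secret key")
	}
	fmt.Println("got expected error:", err)
}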
May 12 10:36:53.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:36:53.739: INFO: namespace secrets-6440 deletion completed in 6.138730934s • [SLOW TEST:6.523 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:36:53.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-7a449aad-dac8-4b9c-ad96-42372d16fe39 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:37:02.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7211" for this suite. 
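The "pod with text data" and "pod with binary data" waits above come from a single ConfigMap carrying both a Data key and a BinaryData key, each projected as a file in the mounted volume. A sketch of that object's shape, with illustrative key names and bytes (the per-run suffix on the ConfigMap name is omitted):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapWithBinary builds a ConfigMap shaped like the
// configmap-test-upd-... object above: one UTF-8 key plus one binary key.
func configMapWithBinary() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
		// BinaryData can hold arbitrary bytes, including non-UTF-8
		// sequences that the string-valued Data field cannot represent;
		// when mounted as a volume, both kinds of key become files.
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
}

func main() { _ = configMapWithBinary() }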
May 12 10:37:24.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:37:25.045: INFO: namespace configmap-7211 deletion completed in 22.208913131s • [SLOW TEST:31.306 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:37:25.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:37:25.895: INFO: Creating deployment "nginx-deployment" May 12 10:37:26.084: INFO: Waiting for observed generation 1 May 12 10:37:28.911: INFO: Waiting for all required pods to come up May 12 10:37:29.878: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 12 10:37:48.355: INFO: Waiting for deployment "nginx-deployment" to complete May 12 10:37:48.372: INFO: Updating deployment "nginx-deployment" with a non-existent image May 12 10:37:48.377: INFO: Updating deployment nginx-deployment May 12 10:37:48.377: INFO: Waiting for observed generation 2 May 12 10:37:50.685: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 12 10:37:50.689: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 12 10:37:51.037: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 12 10:37:51.369: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 12 10:37:51.369: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 12 10:37:51.371: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 12 10:37:51.374: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 12 10:37:51.374: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 12 10:37:51.379: INFO: Updating deployment nginx-deployment May 12 10:37:51.379: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 12 10:37:51.866: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 12 10:37:52.031: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 10:37:53.291: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-183,SelfLink:/apis/apps/v1/namespaces/deployment-183/deployments/nginx-deployment,UID:4c654e7c-62af-4e71-bfd0-0ffb9a137467,ResourceVersion:10456746,Generation:3,CreationTimestamp:2020-05-12 10:37:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-12 10:37:48 +0000 UTC 2020-05-12 10:37:26 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-12 10:37:52 +0000 UTC 2020-05-12 10:37:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 12 10:37:53.336: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-183,SelfLink:/apis/apps/v1/namespaces/deployment-183/replicasets/nginx-deployment-55fb7cb77f,UID:f889302a-eccc-4d28-a0fc-a5fd04cd650a,ResourceVersion:10456717,Generation:3,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4c654e7c-62af-4e71-bfd0-0ffb9a137467 0xc003455e87 0xc003455e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:37:53.336: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 12 10:37:53.336: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-183,SelfLink:/apis/apps/v1/namespaces/deployment-183/replicasets/nginx-deployment-7b8c6f4498,UID:beab4f96-3610-48fe-993a-51a52270c6b9,ResourceVersion:10456769,Generation:3,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4c654e7c-62af-4e71-bfd0-0ffb9a137467 0xc003455f57 0xc003455f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 12 10:37:53.708: INFO: Pod "nginx-deployment-55fb7cb77f-8v86k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8v86k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-8v86k,UID:3b7314ed-c221-4932-a64f-b7e92d342000,ResourceVersion:10456753,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8117 0xc002dd8118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd81b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.708: INFO: Pod "nginx-deployment-55fb7cb77f-bfrbc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bfrbc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-bfrbc,UID:f79cec46-be04-4ae3-9aee-31b79e0fff7f,ResourceVersion:10456773,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8247 0xc002dd8248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd82c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd82e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.708: INFO: Pod "nginx-deployment-55fb7cb77f-k8dkd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k8dkd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-k8dkd,UID:b70cb075-8cd3-40df-a3b8-ac5dc02d6de3,ResourceVersion:10456707,Generation:0,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8367 0xc002dd8368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd83e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 10:37:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.708: INFO: Pod "nginx-deployment-55fb7cb77f-kpq6n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kpq6n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-kpq6n,UID:4eb197ef-1f4e-4441-952f-37bd68fd334b,ResourceVersion:10456772,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd84d7 0xc002dd84d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8560} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-lqbb4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lqbb4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-lqbb4,UID:9e1a94b8-39de-4973-a2cd-835f666e4acd,ResourceVersion:10456771,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8607 0xc002dd8608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8680} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd86a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-p82lv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p82lv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-p82lv,UID:039a4983-f21b-4d6b-be81-22f46d6d1ad4,ResourceVersion:10456678,Generation:0,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8727 0xc002dd8728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd87a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd87c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 10:37:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-pcrls" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pcrls,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-pcrls,UID:6eeae71e-2cc7-4d58-8d8d-4892ee89b8a6,ResourceVersion:10456750,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8897 0xc002dd8898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8910} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-pxl4n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pxl4n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-pxl4n,UID:7a3adcf4-5e3d-4fb6-9ea2-863ad6005447,ResourceVersion:10456705,Generation:0,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd89b7 0xc002dd89b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 10:37:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-qhx7j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qhx7j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-qhx7j,UID:85a5b096-141c-4ff8-beff-efb4d08b87f4,ResourceVersion:10456690,Generation:0,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8b27 0xc002dd8b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 10:37:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.709: INFO: Pod "nginx-deployment-55fb7cb77f-vrhhc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vrhhc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-vrhhc,UID:21c3ef10-5933-4a8c-b913-66c972964fa2,ResourceVersion:10456777,Generation:0,CreationTimestamp:2020-05-12 10:37:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8c97 0xc002dd8c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-55fb7cb77f-vz5pg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vz5pg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-vz5pg,UID:02e12d70-9f63-4492-9539-8f024849c3d9,ResourceVersion:10456774,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8db7 0xc002dd8db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-55fb7cb77f-w82ks" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w82ks,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-w82ks,UID:559eac2e-6df0-4391-8362-b3c7b1d01e38,ResourceVersion:10456681,Generation:0,CreationTimestamp:2020-05-12 10:37:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd8ed7 0xc002dd8ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd8f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd8f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 10:37:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-55fb7cb77f-xb2zx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xb2zx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-55fb7cb77f-xb2zx,UID:0dc365f2-3ab1-41fa-a1c3-da82cde6213b,ResourceVersion:10456744,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f889302a-eccc-4d28-a0fc-a5fd04cd650a 0xc002dd9047 0xc002dd9048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd90c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd90e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-7b8c6f4498-5lklv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5lklv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-5lklv,UID:01167398-8240-4fac-a6eb-d528b32cc61d,ResourceVersion:10456754,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9167 0xc002dd9168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd91e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002dd9200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-7b8c6f4498-69zpv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-69zpv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-69zpv,UID:a0095c0b-005b-4969-b041-80fff07d22d6,ResourceVersion:10456626,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9287 0xc002dd9288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.126,StartTime:2020-05-12 10:37:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://1fe9eb1fcc7708cb312df18401b71dc68a1fcca66f1336f55704a2b0bf191d03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-7b8c6f4498-6dp8c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6dp8c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-6dp8c,UID:b1f5a1c1-51a3-4e6b-8419-64207149e23f,ResourceVersion:10456766,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9417 0xc002dd9418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9490} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd94b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 10:37:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-7b8c6f4498-6npbl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6npbl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-6npbl,UID:d42871f7-d958-44db-9d46-81bd4e752b4c,ResourceVersion:10456726,Generation:0,CreationTimestamp:2020-05-12 10:37:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9577 0xc002dd9578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd95f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.710: INFO: Pod "nginx-deployment-7b8c6f4498-7mrvn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7mrvn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-7mrvn,UID:8cc3e330-e57c-4c4c-9167-32fb8e893f8f,ResourceVersion:10456724,Generation:0,CreationTimestamp:2020-05-12 10:37:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9697 0xc002dd9698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-7w46x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7w46x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-7w46x,UID:7b68c560-7ec4-4688-8650-cf3521a052de,ResourceVersion:10456747,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd97b7 0xc002dd97b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-8ljgp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8ljgp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-8ljgp,UID:f159faf5-abd5-4486-9a21-6d5cb136f82f,ResourceVersion:10456757,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd98d7 0xc002dd98d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9950} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002dd9970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-8shzf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8shzf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-8shzf,UID:b0a0428c-117c-464e-be34-74cbc7fb2dad,ResourceVersion:10456752,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd99f7 0xc002dd99f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-b2s6w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b2s6w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-b2s6w,UID:65f71b07-e6ff-49ee-8177-19e40050d6bf,ResourceVersion:10456638,Generation:0,CreationTimestamp:2020-05-12 10:37:27 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9b17 0xc002dd9b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.129,StartTime:2020-05-12 10:37:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ca32d8545a5bb93e8eaed52cfeaf9f06f483e28635b703dab0c90db050d07c54}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-b4gnx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b4gnx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-b4gnx,UID:c7b589f1-26e3-41c7-9879-ea6f9a3f4e2b,ResourceVersion:10456651,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9c87 
0xc002dd9c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.121,StartTime:2020-05-12 10:37:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://63c09511d5f5823a499dbe81899ca011e8e442c02b8870539068f4ecede37642}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-bvsfj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bvsfj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-bvsfj,UID:bed532b2-0544-4e0f-9d59-c83b15da59e6,ResourceVersion:10456608,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9df7 0xc002dd9df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dd9e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.125,StartTime:2020-05-12 10:37:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e0d0394a217ca58071c0a4aa8c9942d9c6e74b31f2dfe6101d6b9527e3fbfd8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-ftj7z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ftj7z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-ftj7z,UID:031eae1d-f3d8-49a6-ab0d-6c56099d4436,ResourceVersion:10456648,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002dd9f67 0xc002dd9f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dd9fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a40c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.120,StartTime:2020-05-12 10:37:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c5ca1d53c55e6e8106ab0c87f519a1ab6459b997d38dd73bcb4f2ff36a0f3904}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.711: INFO: Pod "nginx-deployment-7b8c6f4498-fwgh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fwgh8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-fwgh8,UID:59ec1d83-fb06-43ae-801b-23a67834f157,ResourceVersion:10456741,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc0009a42f7 0xc0009a42f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a4420} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a4440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-lblw8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lblw8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-lblw8,UID:47679fdc-ffa7-4d81-ad8a-51c6cdd015bc,ResourceVersion:10456635,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc0009a45e7 0xc0009a45e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a46c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0009a46e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.128,StartTime:2020-05-12 10:37:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3baad0949affb66e297871675d18463120c2e2d8855b9de788d59ff7bd56c989}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-lvq9g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lvq9g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-lvq9g,UID:c067842f-9875-42f8-84c6-d1c98a99da97,ResourceVersion:10456601,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc0009a47e7 0xc0009a47e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a4880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a48d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.119,StartTime:2020-05-12 10:37:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://16e786f0738ca955b0b594651f019ff38b15a9f482e3db1962ac62380fa81a7d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-nsvcm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nsvcm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-nsvcm,UID:2d029feb-34bf-4dca-8088-842130e56ce4,ResourceVersion:10456755,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc0009a49a7 0xc0009a49a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a4a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a5d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-txm6g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-txm6g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-txm6g,UID:588fa44a-d800-4ec5-a212-609346f46d0d,ResourceVersion:10456742,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002ee8047 0xc002ee8048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ee80c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ee80e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-w4nwp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w4nwp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-w4nwp,UID:6f867317-1005-4bbc-b64d-831af9774b34,ResourceVersion:10456621,Generation:0,CreationTimestamp:2020-05-12 10:37:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002ee8167 0xc002ee8168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ee81e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ee8200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.127,StartTime:2020-05-12 10:37:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 10:37:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ad7cb12e07fa906d0143e25ac0938cf51271e83b7a1e4dc4d80b9226355fd458}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-ww864" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ww864,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-ww864,UID:be4e4b57-6334-4d84-90d1-8337818cdfad,ResourceVersion:10456775,Generation:0,CreationTimestamp:2020-05-12 10:37:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002ee82d7 0xc002ee82d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ee8350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ee8370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-12 10:37:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 10:37:53.712: INFO: Pod "nginx-deployment-7b8c6f4498-zrtt9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zrtt9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-183,SelfLink:/api/v1/namespaces/deployment-183/pods/nginx-deployment-7b8c6f4498-zrtt9,UID:474a5a97-66c3-4bc3-9e92-5d080676cf2a,ResourceVersion:10456756,Generation:0,CreationTimestamp:2020-05-12 10:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 beab4f96-3610-48fe-993a-51a52270c6b9 0xc002ee8437 0xc002ee8438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-knx9p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-knx9p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-knx9p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ee84b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ee84d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:37:52 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:37:53.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-183" for this suite. May 12 10:38:32.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:38:33.017: INFO: namespace deployment-183 deletion completed in 38.645908874s • [SLOW TEST:67.972 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:38:33.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 10:38:39.828: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:38:40.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5588" for this suite. 
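The long pod listings above are the proportional-scaling check enumerating every ReplicaSet pod and logging whether it is available. A rough hand-run equivalent of that availability view against the same namespace (label selector and names taken from the log; the jsonpath query is illustrative, not the framework's own code):

    kubectl --kubeconfig=/root/.kube/config get pods -n deployment-183 \
      -l 'name=nginx,pod-template-hash=7b8c6f4498' \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Pods printed as Running/True correspond to the "is available" entries above; the Pending pods with only a PodScheduled condition match the "is not available" ones.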
May 12 10:38:48.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:38:48.817: INFO: namespace container-runtime-5588 deletion completed in 8.26561595s • [SLOW TEST:15.800 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:38:48.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 12 10:38:49.489: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix442432663/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:38:49.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3573" for this suite. 
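The proxy test above drives kubectl proxy over a Unix domain socket instead of a TCP port. A minimal manual reproduction (the socket path is illustrative; curl's --unix-socket flag requires curl 7.40 or newer):

    # start the proxy listening on a local socket instead of a port
    kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock &
    # send a request through the socket; the hostname in the URL is not used for routing
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    kill $!

The proxy forwards /api/ to the apiserver, which is the same output the test retrieves in its "retrieving proxy /api/ output" step.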
May 12 10:38:57.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:38:57.956: INFO: namespace kubectl-3573 deletion completed in 8.204854908s • [SLOW TEST:9.138 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:38:57.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9928.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9928.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9928.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9928.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 85.200.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.200.85_udp@PTR;check="$$(dig +tcp +noall +answer +search 85.200.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.200.85_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9928.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9928.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9928.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9928.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9928.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9928.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 85.200.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.200.85_udp@PTR;check="$$(dig +tcp +noall +answer +search 85.200.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.200.85_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 10:39:10.791: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.794: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.797: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.800: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.844: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.848: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.850: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:10.862: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 10:39:15.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.869: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods 
dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.923: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.925: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:15.939: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 10:39:20.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.870: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.872: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.889: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the 
server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.894: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.895: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:20.909: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 10:39:25.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.870: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.872: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.875: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.890: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.892: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod 
dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:25.941: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 10:39:30.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:30.991: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:30.995: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:30.997: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:31.016: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:31.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:31.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:31.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:31.548: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 
10:39:35.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.868: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.887: INFO: Unable to read jessie_udp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.889: INFO: Unable to read jessie_tcp@dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.891: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.893: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local from pod dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977: the server could not find the requested resource (get pods dns-test-383c4c92-6683-43ef-85d0-b45416767977) May 12 10:39:35.906: INFO: Lookups using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 failed for: [wheezy_udp@dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@dns-test-service.dns-9928.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_udp@dns-test-service.dns-9928.svc.cluster.local jessie_tcp@dns-test-service.dns-9928.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9928.svc.cluster.local] May 12 10:39:40.953: INFO: DNS probes using dns-9928/dns-test-383c4c92-6683-43ef-85d0-b45416767977 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:39:42.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9928" for this suite. 
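Every wheezy and jessie probe loop above uses the same dig pattern: query a record, and write OK to a results file only if the answer section comes back non-empty. Stripped of the loop and result-file plumbing, the per-record check reduces to queries like these (service and namespace names taken from the log; +short is substituted for the test's +noall +answer for brevity):

    # A record of the service, over UDP (the default) and then TCP
    dig +short +search dns-test-service.dns-9928.svc.cluster.local A
    dig +short +tcp +search dns-test-service.dns-9928.svc.cluster.local A
    # SRV record published for the service's named http port
    dig +short +search _http._tcp.dns-test-service.dns-9928.svc.cluster.local SRV

The repeated early failures are expected behavior, not a bug: the probe pod retries roughly once per second for up to 600 iterations, and the lookups only succeed once the cluster DNS has programmed records for the newly created services, which this run reaches at 10:39:40.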
May 12 10:39:52.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:39:52.460: INFO: namespace dns-9928 deletion completed in 10.158182953s • [SLOW TEST:54.505 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:39:52.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 12 10:39:52.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5775' May 12 10:40:01.033: INFO: stderr: "" May 12 10:40:01.033: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 10:40:01.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5775' May 12 10:40:01.286: INFO: stderr: "" May 12 10:40:01.286: INFO: stdout: "update-demo-nautilus-dtrfm update-demo-nautilus-wgsfq " May 12 10:40:01.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtrfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:01.416: INFO: stderr: "" May 12 10:40:01.416: INFO: stdout: "" May 12 10:40:01.416: INFO: update-demo-nautilus-dtrfm is created but not running May 12 10:40:06.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5775' May 12 10:40:06.575: INFO: stderr: "" May 12 10:40:06.575: INFO: stdout: "update-demo-nautilus-dtrfm update-demo-nautilus-wgsfq " May 12 10:40:06.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtrfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:06.720: INFO: stderr: "" May 12 10:40:06.721: INFO: stdout: "" May 12 10:40:06.721: INFO: update-demo-nautilus-dtrfm is created but not running May 12 10:40:11.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5775' May 12 10:40:11.819: INFO: stderr: "" May 12 10:40:11.819: INFO: stdout: "update-demo-nautilus-dtrfm update-demo-nautilus-wgsfq " May 12 10:40:11.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtrfm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:11.911: INFO: stderr: "" May 12 10:40:11.911: INFO: stdout: "true" May 12 10:40:11.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtrfm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:12.011: INFO: stderr: "" May 12 10:40:12.011: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:40:12.011: INFO: validating pod update-demo-nautilus-dtrfm May 12 10:40:12.015: INFO: got data: { "image": "nautilus.jpg" } May 12 10:40:12.015: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:40:12.015: INFO: update-demo-nautilus-dtrfm is verified up and running May 12 10:40:12.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgsfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:12.102: INFO: stderr: "" May 12 10:40:12.102: INFO: stdout: "true" May 12 10:40:12.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgsfq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5775' May 12 10:40:12.181: INFO: stderr: "" May 12 10:40:12.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 10:40:12.181: INFO: validating pod update-demo-nautilus-wgsfq May 12 10:40:12.184: INFO: got data: { "image": "nautilus.jpg" } May 12 10:40:12.184: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 10:40:12.184: INFO: update-demo-nautilus-wgsfq is verified up and running STEP: using delete to clean up resources May 12 10:40:12.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5775' May 12 10:40:12.395: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 10:40:12.395: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 10:40:12.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5775' May 12 10:40:13.106: INFO: stderr: "No resources found.\n" May 12 10:40:13.106: INFO: stdout: "" May 12 10:40:13.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5775 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 10:40:13.483: INFO: stderr: "" May 12 10:40:13.483: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:40:13.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5775" for this suite. May 12 10:40:24.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:40:24.166: INFO: namespace kubectl-5775 deletion completed in 10.402241375s • [SLOW TEST:31.705 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:40:24.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 12 10:40:25.395: INFO: created pod pod-service-account-defaultsa May 12 10:40:25.395: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 12 10:40:25.483: INFO: created pod pod-service-account-mountsa May 12 10:40:25.483: INFO: pod pod-service-account-mountsa service account token volume mount: true May 12 10:40:25.603: INFO: created pod pod-service-account-nomountsa May 12 10:40:25.603: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 12 10:40:25.652: INFO: created pod pod-service-account-defaultsa-mountspec May 12 10:40:25.652: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 12 10:40:25.902: INFO: created pod pod-service-account-mountsa-mountspec May 12 10:40:25.903: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 12 10:40:26.159: INFO: created pod pod-service-account-nomountsa-mountspec May 12 10:40:26.159: INFO: pod pod-service-account-nomountsa-mountspec service account 
token volume mount: true May 12 10:40:26.350: INFO: created pod pod-service-account-defaultsa-nomountspec May 12 10:40:26.350: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 12 10:40:26.422: INFO: created pod pod-service-account-mountsa-nomountspec May 12 10:40:26.422: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 12 10:40:26.449: INFO: created pod pod-service-account-nomountsa-nomountspec May 12 10:40:26.449: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:40:26.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-654" for this suite. May 12 10:41:13.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:41:13.424: INFO: namespace svcaccounts-654 deletion completed in 46.718168326s • [SLOW TEST:49.258 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:41:13.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:41:14.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677" in namespace "downward-api-8746" to be "success or failure" May 12 10:41:14.603: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Pending", Reason="", readiness=false. Elapsed: 533.082638ms May 12 10:41:16.784: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714202542s May 12 10:41:18.992: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.922018322s May 12 10:41:21.143: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Pending", Reason="", readiness=false. Elapsed: 7.07300441s May 12 10:41:23.562: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Pending", Reason="", readiness=false. Elapsed: 9.492635869s May 12 10:41:25.566: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.496404772s STEP: Saw pod success May 12 10:41:25.566: INFO: Pod "downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677" satisfied condition "success or failure" May 12 10:41:25.568: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677 container client-container: STEP: delete the pod May 12 10:41:26.055: INFO: Waiting for pod downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677 to disappear May 12 10:41:26.444: INFO: Pod downwardapi-volume-39fcddc3-a47c-4c89-91ac-2e3091bf3677 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:41:26.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8746" for this suite. May 12 10:41:34.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:41:34.577: INFO: namespace downward-api-8746 deletion completed in 8.128556849s • [SLOW TEST:21.153 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:41:34.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:41:47.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6319" for this suite. 
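
Annotation: the Kubelet check above boils down to the rule that a container whose command always fails must surface a terminated state carrying a reason. A minimal reproduction, assuming a reachable cluster and kubectl on PATH (pod name and image are illustrative, not taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
# After the container exits, read back the terminated reason (typically "Error"):
kubectl get pod bin-false -o go-template='{{range .status.containerStatuses}}{{.state.terminated.reason}}{{end}}'
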
May 12 10:41:57.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:41:57.934: INFO: namespace kubelet-test-6319 deletion completed in 10.07452622s • [SLOW TEST:23.356 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:41:57.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-569519a1-ddaa-4b89-bc06-ed8d316123f7 STEP: Creating a pod to test consume secrets May 12 10:41:58.029: INFO: Waiting up to 5m0s for pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811" in namespace "secrets-8786" to be "success or failure" May 12 10:41:58.053: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811": Phase="Pending", Reason="", readiness=false. Elapsed: 23.491023ms May 12 10:42:00.123: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094084884s May 12 10:42:02.127: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098002155s May 12 10:42:04.167: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137806051s May 12 10:42:06.170: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.140887875s STEP: Saw pod success May 12 10:42:06.170: INFO: Pod "pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811" satisfied condition "success or failure" May 12 10:42:06.172: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811 container secret-volume-test: STEP: delete the pod May 12 10:42:06.816: INFO: Waiting for pod pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811 to disappear May 12 10:42:06.888: INFO: Pod pod-secrets-c5cfa74e-5b98-47fa-8103-8bcc2d5b1811 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:42:06.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8786" for this suite. 
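
Annotation: the secret-with-mappings test above relies on the items field of a secret volume, which remaps a secret key to a chosen file path inside the mount. A sketch of the same mechanism (all names and values illustrative):

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs pod-secrets-map    # prints: value-1
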
May 12 10:42:13.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:42:13.324: INFO: namespace secrets-8786 deletion completed in 6.433899368s • [SLOW TEST:15.391 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:42:13.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:42:14.166: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e" in namespace "projected-8729" to be "success or failure" May 12 10:42:14.224: INFO: Pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e": Phase="Pending", Reason="", readiness=false. Elapsed: 58.101048ms May 12 10:42:16.245: INFO: Pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079332839s May 12 10:42:18.401: INFO: Pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235574422s May 12 10:42:20.404: INFO: Pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238720356s STEP: Saw pod success May 12 10:42:20.404: INFO: Pod "downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e" satisfied condition "success or failure" May 12 10:42:20.407: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e container client-container: STEP: delete the pod May 12 10:42:20.620: INFO: Waiting for pod downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e to disappear May 12 10:42:20.844: INFO: Pod downwardapi-volume-7f158256-6c3b-4ccc-a9e7-6dbac507059e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:42:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8729" for this suite. 
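
Annotation: the projected flavor of the downward API check nests a downwardAPI source inside a projected volume; resourceFieldRef with a divisor converts the container's cpu request into plain units in the file. Sketch (names and the 250m request are illustrative; divisor 1m makes the file read in millicores):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs projected-cpu-demo    # prints: 250
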
May 12 10:42:27.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:42:27.150: INFO: namespace projected-8729 deletion completed in 6.302704693s • [SLOW TEST:13.826 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:42:27.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-8nfp STEP: Creating a pod to test atomic-volume-subpath May 12 10:42:27.436: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8nfp" in namespace "subpath-2341" to be "success or failure" May 12 10:42:27.438: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Pending", Reason="", readiness=false. Elapsed: 1.94239ms May 12 10:42:29.653: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216248172s May 12 10:42:31.671: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234339295s May 12 10:42:33.682: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.246096744s May 12 10:42:35.686: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.24922912s May 12 10:42:37.689: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.252423977s May 12 10:42:39.988: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.551195749s May 12 10:42:41.991: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.554476871s May 12 10:42:44.054: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.617196667s May 12 10:42:46.077: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.640931874s May 12 10:42:48.082: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.646132594s May 12 10:42:50.087: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.650208657s May 12 10:42:52.091: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. Elapsed: 24.654324492s May 12 10:42:54.108: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.671316985s May 12 10:42:56.111: INFO: Pod "pod-subpath-test-secret-8nfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.674624383s STEP: Saw pod success May 12 10:42:56.111: INFO: Pod "pod-subpath-test-secret-8nfp" satisfied condition "success or failure" May 12 10:42:56.113: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-8nfp container test-container-subpath-secret-8nfp: STEP: delete the pod May 12 10:42:56.296: INFO: Waiting for pod pod-subpath-test-secret-8nfp to disappear May 12 10:42:56.383: INFO: Pod pod-subpath-test-secret-8nfp no longer exists STEP: Deleting pod pod-subpath-test-secret-8nfp May 12 10:42:56.383: INFO: Deleting pod "pod-subpath-test-secret-8nfp" in namespace "subpath-2341" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:42:56.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2341" for this suite. May 12 10:43:02.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:43:02.481: INFO: namespace subpath-2341 deletion completed in 6.092925805s • [SLOW TEST:35.330 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:43:02.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 10:43:02.673: INFO: Waiting up to 5m0s for pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67" in namespace "downward-api-6350" to be "success or failure" May 12 10:43:02.695: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67": Phase="Pending", Reason="", readiness=false. Elapsed: 21.67086ms May 12 10:43:04.785: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112394193s May 12 10:43:06.789: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116339241s May 12 10:43:08.793: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67": Phase="Running", Reason="", readiness=true. Elapsed: 6.119964787s May 12 10:43:10.796: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.123201692s STEP: Saw pod success May 12 10:43:10.796: INFO: Pod "downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67" satisfied condition "success or failure" May 12 10:43:10.799: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67 container dapi-container: STEP: delete the pod May 12 10:43:10.824: INFO: Waiting for pod downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67 to disappear May 12 10:43:10.990: INFO: Pod downward-api-ba83f796-6e1f-434c-bc88-0abcedce2f67 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:43:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6350" for this suite. May 12 10:43:19.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:43:19.180: INFO: namespace downward-api-6350 deletion completed in 8.186908581s • [SLOW TEST:16.699 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:43:19.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 10:43:28.654: INFO: Successfully updated pod "pod-update-94ced025-545e-41f7-90fb-a10de6fb921f" STEP: verifying the updated pod is in kubernetes May 12 10:43:29.061: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:43:29.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-583" for this suite. 
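
Annotation: the pod-update test above is an in-place mutation of live pod metadata followed by a read-back. The CLI round trip is roughly the following (names illustrative; kubectl run with --restart=Never creates a bare pod on the kubectl generation used here):

kubectl run pod-update-example --image=nginx --restart=Never
kubectl label pod pod-update-example time=morning
kubectl label pod pod-update-example time=evening --overwrite
kubectl get pod pod-update-example -o go-template='{{.metadata.labels.time}}'    # prints: evening
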
May 12 10:43:51.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:43:51.468: INFO: namespace pods-583 deletion completed in 22.209368995s • [SLOW TEST:32.288 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:43:51.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6604 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 12 10:43:52.142: INFO: Found 0 stateful pods, waiting for 3 May 12 10:44:02.219: INFO: Found 2 stateful pods, waiting for 3 May 12 10:44:12.147: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 10:44:12.147: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 10:44:12.147: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 10:44:12.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6604 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:44:12.775: INFO: stderr: "I0512 10:44:12.479276 1068 log.go:172] (0xc0006fea50) (0xc0002e6aa0) Create stream\nI0512 10:44:12.479315 1068 log.go:172] (0xc0006fea50) (0xc0002e6aa0) Stream added, broadcasting: 1\nI0512 10:44:12.481003 1068 log.go:172] (0xc0006fea50) Reply frame received for 1\nI0512 10:44:12.481023 1068 log.go:172] (0xc0006fea50) (0xc0009bc000) Create stream\nI0512 10:44:12.481030 1068 log.go:172] (0xc0006fea50) (0xc0009bc000) Stream added, broadcasting: 3\nI0512 10:44:12.481857 1068 log.go:172] (0xc0006fea50) Reply frame received for 3\nI0512 10:44:12.481896 1068 log.go:172] (0xc0006fea50) (0xc0008b0000) Create stream\nI0512 10:44:12.481910 1068 log.go:172] (0xc0006fea50) (0xc0008b0000) Stream added, broadcasting: 5\nI0512 10:44:12.482768 1068 log.go:172] (0xc0006fea50) Reply frame received for 5\nI0512 10:44:12.558271 1068 log.go:172] (0xc0006fea50) Data frame received for 5\nI0512 10:44:12.558288 1068 log.go:172] (0xc0008b0000) (5) Data frame handling\nI0512 10:44:12.558302 1068 log.go:172] (0xc0008b0000) (5) Data frame 
sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 10:44:12.768666 1068 log.go:172] (0xc0006fea50) Data frame received for 3\nI0512 10:44:12.768751 1068 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0512 10:44:12.768810 1068 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0512 10:44:12.768824 1068 log.go:172] (0xc0006fea50) Data frame received for 3\nI0512 10:44:12.768829 1068 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0512 10:44:12.768868 1068 log.go:172] (0xc0006fea50) Data frame received for 5\nI0512 10:44:12.768886 1068 log.go:172] (0xc0008b0000) (5) Data frame handling\nI0512 10:44:12.771138 1068 log.go:172] (0xc0006fea50) Data frame received for 1\nI0512 10:44:12.771190 1068 log.go:172] (0xc0002e6aa0) (1) Data frame handling\nI0512 10:44:12.771217 1068 log.go:172] (0xc0002e6aa0) (1) Data frame sent\nI0512 10:44:12.771249 1068 log.go:172] (0xc0006fea50) (0xc0002e6aa0) Stream removed, broadcasting: 1\nI0512 10:44:12.771280 1068 log.go:172] (0xc0006fea50) Go away received\nI0512 10:44:12.772145 1068 log.go:172] (0xc0006fea50) (0xc0002e6aa0) Stream removed, broadcasting: 1\nI0512 10:44:12.772160 1068 log.go:172] (0xc0006fea50) (0xc0009bc000) Stream removed, broadcasting: 3\nI0512 10:44:12.772168 1068 log.go:172] (0xc0006fea50) (0xc0008b0000) Stream removed, broadcasting: 5\n" May 12 10:44:12.776: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:44:12.776: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 10:44:22.801: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 10:44:33.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6604 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:44:33.716: INFO: stderr: "I0512 10:44:33.615516 1090 log.go:172] (0xc0009242c0) (0xc000302820) Create stream\nI0512 10:44:33.615565 1090 log.go:172] (0xc0009242c0) (0xc000302820) Stream added, broadcasting: 1\nI0512 10:44:33.617946 1090 log.go:172] (0xc0009242c0) Reply frame received for 1\nI0512 10:44:33.617981 1090 log.go:172] (0xc0009242c0) (0xc0008c8000) Create stream\nI0512 10:44:33.617993 1090 log.go:172] (0xc0009242c0) (0xc0008c8000) Stream added, broadcasting: 3\nI0512 10:44:33.618657 1090 log.go:172] (0xc0009242c0) Reply frame received for 3\nI0512 10:44:33.618681 1090 log.go:172] (0xc0009242c0) (0xc00033c000) Create stream\nI0512 10:44:33.618690 1090 log.go:172] (0xc0009242c0) (0xc00033c000) Stream added, broadcasting: 5\nI0512 10:44:33.619203 1090 log.go:172] (0xc0009242c0) Reply frame received for 5\nI0512 10:44:33.710754 1090 log.go:172] (0xc0009242c0) Data frame received for 3\nI0512 10:44:33.710785 1090 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0512 10:44:33.710792 1090 log.go:172] (0xc0008c8000) (3) Data frame sent\nI0512 10:44:33.710797 1090 log.go:172] (0xc0009242c0) Data frame received for 3\nI0512 10:44:33.710803 1090 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0512 10:44:33.710821 1090 log.go:172] (0xc0009242c0) Data frame received for 5\nI0512 10:44:33.710825 1090 log.go:172] (0xc00033c000) (5) Data frame handling\nI0512 10:44:33.710830 1090 log.go:172] (0xc00033c000) (5) Data frame sent\nI0512 10:44:33.710835 1090 log.go:172] (0xc0009242c0) 
Data frame received for 5\nI0512 10:44:33.710839 1090 log.go:172] (0xc00033c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 10:44:33.711875 1090 log.go:172] (0xc0009242c0) Data frame received for 1\nI0512 10:44:33.711907 1090 log.go:172] (0xc000302820) (1) Data frame handling\nI0512 10:44:33.711923 1090 log.go:172] (0xc000302820) (1) Data frame sent\nI0512 10:44:33.711936 1090 log.go:172] (0xc0009242c0) (0xc000302820) Stream removed, broadcasting: 1\nI0512 10:44:33.711948 1090 log.go:172] (0xc0009242c0) Go away received\nI0512 10:44:33.712264 1090 log.go:172] (0xc0009242c0) (0xc000302820) Stream removed, broadcasting: 1\nI0512 10:44:33.712278 1090 log.go:172] (0xc0009242c0) (0xc0008c8000) Stream removed, broadcasting: 3\nI0512 10:44:33.712285 1090 log.go:172] (0xc0009242c0) (0xc00033c000) Stream removed, broadcasting: 5\n" May 12 10:44:33.717: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:44:33.717: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:44:43.736: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update May 12 10:44:43.736: INFO: Waiting for Pod statefulset-6604/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:44:43.736: INFO: Waiting for Pod statefulset-6604/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:44:53.760: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update May 12 10:44:53.760: INFO: Waiting for Pod statefulset-6604/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:44:53.760: INFO: Waiting for Pod statefulset-6604/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:45:03.766: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update May 12 10:45:03.766: INFO: Waiting for Pod statefulset-6604/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 10:45:13.742: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update STEP: Rolling back to a previous revision May 12 10:45:23.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6604 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 10:45:24.092: INFO: stderr: "I0512 10:45:23.948937 1106 log.go:172] (0xc000116840) (0xc0006b0960) Create stream\nI0512 10:45:23.948992 1106 log.go:172] (0xc000116840) (0xc0006b0960) Stream added, broadcasting: 1\nI0512 10:45:23.950972 1106 log.go:172] (0xc000116840) Reply frame received for 1\nI0512 10:45:23.951023 1106 log.go:172] (0xc000116840) (0xc0002e0140) Create stream\nI0512 10:45:23.951045 1106 log.go:172] (0xc000116840) (0xc0002e0140) Stream added, broadcasting: 3\nI0512 10:45:23.952662 1106 log.go:172] (0xc000116840) Reply frame received for 3\nI0512 10:45:23.952688 1106 log.go:172] (0xc000116840) (0xc000a68000) Create stream\nI0512 10:45:23.952697 1106 log.go:172] (0xc000116840) (0xc000a68000) Stream added, broadcasting: 5\nI0512 10:45:23.954247 1106 log.go:172] (0xc000116840) Reply frame received for 5\nI0512 10:45:24.029863 1106 log.go:172] (0xc000116840) Data frame received for 5\nI0512 10:45:24.029886 1106 log.go:172] (0xc000a68000) (5) Data frame handling\nI0512 10:45:24.029900 1106 log.go:172] (0xc000a68000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 10:45:24.086400 1106 log.go:172] (0xc000116840) Data 
frame received for 5\nI0512 10:45:24.086429 1106 log.go:172] (0xc000a68000) (5) Data frame handling\nI0512 10:45:24.086453 1106 log.go:172] (0xc000116840) Data frame received for 3\nI0512 10:45:24.086474 1106 log.go:172] (0xc0002e0140) (3) Data frame handling\nI0512 10:45:24.086490 1106 log.go:172] (0xc0002e0140) (3) Data frame sent\nI0512 10:45:24.086500 1106 log.go:172] (0xc000116840) Data frame received for 3\nI0512 10:45:24.086507 1106 log.go:172] (0xc0002e0140) (3) Data frame handling\nI0512 10:45:24.087423 1106 log.go:172] (0xc000116840) Data frame received for 1\nI0512 10:45:24.087439 1106 log.go:172] (0xc0006b0960) (1) Data frame handling\nI0512 10:45:24.087448 1106 log.go:172] (0xc0006b0960) (1) Data frame sent\nI0512 10:45:24.087458 1106 log.go:172] (0xc000116840) (0xc0006b0960) Stream removed, broadcasting: 1\nI0512 10:45:24.087469 1106 log.go:172] (0xc000116840) Go away received\nI0512 10:45:24.087765 1106 log.go:172] (0xc000116840) (0xc0006b0960) Stream removed, broadcasting: 1\nI0512 10:45:24.087783 1106 log.go:172] (0xc000116840) (0xc0002e0140) Stream removed, broadcasting: 3\nI0512 10:45:24.087803 1106 log.go:172] (0xc000116840) (0xc000a68000) Stream removed, broadcasting: 5\n" May 12 10:45:24.092: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 10:45:24.092: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 10:45:34.120: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 10:45:44.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6604 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 10:45:44.522: INFO: stderr: "I0512 10:45:44.445522 1125 log.go:172] (0xc000994420) (0xc0008cc5a0) Create stream\nI0512 10:45:44.445570 1125 log.go:172] (0xc000994420) (0xc0008cc5a0) Stream added, broadcasting: 1\nI0512 10:45:44.447926 1125 log.go:172] (0xc000994420) Reply frame received for 1\nI0512 10:45:44.447959 1125 log.go:172] (0xc000994420) (0xc00090c000) Create stream\nI0512 10:45:44.447969 1125 log.go:172] (0xc000994420) (0xc00090c000) Stream added, broadcasting: 3\nI0512 10:45:44.449006 1125 log.go:172] (0xc000994420) Reply frame received for 3\nI0512 10:45:44.449042 1125 log.go:172] (0xc000994420) (0xc0008cc6e0) Create stream\nI0512 10:45:44.449052 1125 log.go:172] (0xc000994420) (0xc0008cc6e0) Stream added, broadcasting: 5\nI0512 10:45:44.450010 1125 log.go:172] (0xc000994420) Reply frame received for 5\nI0512 10:45:44.515680 1125 log.go:172] (0xc000994420) Data frame received for 5\nI0512 10:45:44.515727 1125 log.go:172] (0xc0008cc6e0) (5) Data frame handling\nI0512 10:45:44.515741 1125 log.go:172] (0xc0008cc6e0) (5) Data frame sent\nI0512 10:45:44.515750 1125 log.go:172] (0xc000994420) Data frame received for 5\nI0512 10:45:44.515759 1125 log.go:172] (0xc0008cc6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 10:45:44.515797 1125 log.go:172] (0xc000994420) Data frame received for 3\nI0512 10:45:44.515813 1125 log.go:172] (0xc00090c000) (3) Data frame handling\nI0512 10:45:44.515832 1125 log.go:172] (0xc00090c000) (3) Data frame sent\nI0512 10:45:44.515845 1125 log.go:172] (0xc000994420) Data frame received for 3\nI0512 10:45:44.515857 1125 log.go:172] (0xc00090c000) (3) Data frame handling\nI0512 10:45:44.517070 1125 log.go:172] (0xc000994420) Data frame received for 1\nI0512 10:45:44.517091 1125 
log.go:172] (0xc0008cc5a0) (1) Data frame handling\nI0512 10:45:44.517108 1125 log.go:172] (0xc0008cc5a0) (1) Data frame sent\nI0512 10:45:44.517303 1125 log.go:172] (0xc000994420) (0xc0008cc5a0) Stream removed, broadcasting: 1\nI0512 10:45:44.517655 1125 log.go:172] (0xc000994420) Go away received\nI0512 10:45:44.517735 1125 log.go:172] (0xc000994420) (0xc0008cc5a0) Stream removed, broadcasting: 1\nI0512 10:45:44.517763 1125 log.go:172] (0xc000994420) (0xc00090c000) Stream removed, broadcasting: 3\nI0512 10:45:44.517777 1125 log.go:172] (0xc000994420) (0xc0008cc6e0) Stream removed, broadcasting: 5\n" May 12 10:45:44.522: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 10:45:44.522: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 10:45:54.536: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update May 12 10:45:54.536: INFO: Waiting for Pod statefulset-6604/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:45:54.536: INFO: Waiting for Pod statefulset-6604/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:46:04.679: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update May 12 10:46:04.679: INFO: Waiting for Pod statefulset-6604/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 10:46:14.572: INFO: Waiting for StatefulSet statefulset-6604/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 10:46:25.270: INFO: Deleting all statefulset in ns statefulset-6604 May 12 10:46:25.273: INFO: Scaling statefulset ss2 to 0 May 12 10:46:55.438: INFO: Waiting for statefulset status.replicas updated to 0 May 12 10:46:55.442: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:46:55.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6604" for this suite. 
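
Annotation: the StatefulSet exercise above performs a rolling template update and a rollback by patching the object through the API; kubectl's set/rollout subcommands drive the same flow. A hedged equivalent, assuming the container in ss2's template is named nginx (the log only shows the image names):

kubectl -n statefulset-6604 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-6604 rollout status statefulset/ss2
# Roll back to the previous controller revision:
kubectl -n statefulset-6604 rollout undo statefulset/ss2
kubectl -n statefulset-6604 get pod ss2-1 -o go-template='{{(index .spec.containers 0).image}}'
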
May 12 10:47:03.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:47:03.579: INFO: namespace statefulset-6604 deletion completed in 8.097631578s • [SLOW TEST:192.110 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:47:03.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 10:47:25.036: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:25.383: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:27.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:27.466: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:29.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:29.387: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:31.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:31.388: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:33.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:33.386: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:35.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:35.725: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:37.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:37.386: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:39.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:39.387: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:41.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:41.387: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:43.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:43.576: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:45.383: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear May 12 10:47:45.386: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:47.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:47.386: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:49.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:49.386: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:51.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:51.387: INFO: Pod pod-with-poststart-exec-hook still exists May 12 10:47:53.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 10:47:53.496: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:47:53.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5060" for this suite. May 12 10:48:21.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:48:21.773: INFO: namespace container-lifecycle-hook-5060 deletion completed in 28.273951714s • [SLOW TEST:78.193 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:48:21.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4733603b-5351-46a0-b5b4-3519d2d91213 STEP: Creating a pod to test consume secrets May 12 10:48:24.988: INFO: Waiting up to 5m0s for pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e" in namespace "secrets-6036" to be "success or failure" May 12 10:48:24.991: INFO: Pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108598ms May 12 10:48:27.048: INFO: Pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060028709s May 12 10:48:29.132: INFO: Pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.143774066s May 12 10:48:31.135: INFO: Pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146967523s STEP: Saw pod success May 12 10:48:31.135: INFO: Pod "pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e" satisfied condition "success or failure" May 12 10:48:31.138: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e container secret-volume-test: STEP: delete the pod May 12 10:48:31.158: INFO: Waiting for pod pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e to disappear May 12 10:48:31.163: INFO: Pod pod-secrets-f094b5dd-37cd-43a2-9683-be00c11a375e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:48:31.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6036" for this suite. May 12 10:48:37.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:48:37.255: INFO: namespace secrets-6036 deletion completed in 6.089284774s STEP: Destroying namespace "secret-namespace-807" for this suite. May 12 10:48:43.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:48:43.601: INFO: namespace secret-namespace-807 deletion completed in 6.345936066s • [SLOW TEST:21.828 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:48:43.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-82177d7a-d341-4928-be7b-259e703981f0 STEP: Creating a pod to test consume configMaps May 12 10:48:44.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7" in namespace "configmap-782" to be "success or failure" May 12 10:48:44.072: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Pending", Reason="", readiness=false. Elapsed: 62.24521ms May 12 10:48:46.074: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064760961s May 12 10:48:48.078: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.068989594s May 12 10:48:50.390: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380684606s May 12 10:48:52.420: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Running", Reason="", readiness=true. Elapsed: 8.41036037s May 12 10:48:54.431: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.42190878s STEP: Saw pod success May 12 10:48:54.431: INFO: Pod "pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7" satisfied condition "success or failure" May 12 10:48:54.434: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7 container configmap-volume-test: STEP: delete the pod May 12 10:48:54.848: INFO: Waiting for pod pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7 to disappear May 12 10:48:54.913: INFO: Pod pod-configmaps-86bfe9bf-d9a6-4d53-b7bf-3fede1e775f7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:48:54.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-782" for this suite. May 12 10:49:01.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:49:01.217: INFO: namespace configmap-782 deletion completed in 6.297445395s • [SLOW TEST:17.615 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:49:01.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:49:01.452: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:49:06.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3331" for this suite. 
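
Annotation: kubectl exec wraps the same pod exec subresource that the websocket test drives directly, so the CLI equivalent of the remote-command check is a one-liner (pod name illustrative):

kubectl exec pod-exec-websocket -- /bin/sh -c 'echo remote execution works'
# The underlying endpoint, which the API server also serves over WebSocket, is:
#   POST /api/v1/namespaces/<namespace>/pods/<pod>/exec?command=echo&command=hi&stdout=true
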
May 12 10:49:58.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:49:58.140: INFO: namespace pods-3331 deletion completed in 52.113210151s • [SLOW TEST:56.923 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:49:58.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 12 10:49:58.831: INFO: Waiting up to 5m0s for pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711" in namespace "var-expansion-5276" to be "success or failure" May 12 10:49:58.904: INFO: Pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711": Phase="Pending", Reason="", readiness=false. Elapsed: 72.779777ms May 12 10:50:00.943: INFO: Pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112009382s May 12 10:50:02.947: INFO: Pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115854776s May 12 10:50:05.259: INFO: Pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428402642s STEP: Saw pod success May 12 10:50:05.259: INFO: Pod "var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711" satisfied condition "success or failure" May 12 10:50:05.263: INFO: Trying to get logs from node iruya-worker pod var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711 container dapi-container: STEP: delete the pod May 12 10:50:05.682: INFO: Waiting for pod var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711 to disappear May 12 10:50:05.872: INFO: Pod var-expansion-18b30717-4cbc-4ca5-80f2-524358d5c711 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:50:05.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5276" for this suite. 
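
Annotation: the variable-expansion test depends on the $(VAR) syntax in env values, which the kubelet resolves from previously declared variables at container start (declaration order matters). Minimal sketch with illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"
EOF
kubectl logs var-expansion-demo    # prints: foo-value;;bar-value
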
May 12 10:50:11.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:12.060: INFO: namespace var-expansion-5276 deletion completed in 6.183823191s • [SLOW TEST:13.919 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:50:12.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:50:12.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c" in namespace "downward-api-1075" to be "success or failure" May 12 10:50:12.238: INFO: Pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.46246ms May 12 10:50:14.362: INFO: Pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12760283s May 12 10:50:16.365: INFO: Pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130860664s May 12 10:50:18.369: INFO: Pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134811155s STEP: Saw pod success May 12 10:50:18.369: INFO: Pod "downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c" satisfied condition "success or failure" May 12 10:50:18.371: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c container client-container: STEP: delete the pod May 12 10:50:18.518: INFO: Waiting for pod downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c to disappear May 12 10:50:18.576: INFO: Pod downwardapi-volume-c013837b-e76a-4142-a6a7-b62091219f7c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:50:18.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1075" for this suite. 
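
Annotation: the DefaultMode check above sets the permission bits applied to every file in a downward API volume. A sketch using 0400 (the mode value is illustrative; stat -L is used on the assumption that the kubelet exposes the file through a symlink, which plain ls -l would not dereference):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a %n' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-defaultmode    # expected: 400 /etc/podinfo/podname
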
May 12 10:50:24.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:24.789: INFO: namespace downward-api-1075 deletion completed in 6.210055983s • [SLOW TEST:12.729 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:50:24.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 12 10:50:25.064: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 10:50:25.087: INFO: Waiting for terminating namespaces to be deleted... May 12 10:50:25.090: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 12 10:50:25.095: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 12 10:50:25.095: INFO: Container kube-proxy ready: true, restart count 0 May 12 10:50:25.095: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 12 10:50:25.095: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:50:25.095: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 12 10:50:25.100: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 12 10:50:25.100: INFO: Container kindnet-cni ready: true, restart count 0 May 12 10:50:25.100: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 12 10:50:25.100: INFO: Container kube-proxy ready: true, restart count 0 May 12 10:50:25.100: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 12 10:50:25.100: INFO: Container coredns ready: true, restart count 0 May 12 10:50:25.100: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 12 10:50:25.100: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e424ee4613891], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
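The FailedScheduling event above is exactly what a pod with an unsatisfiable nodeSelector should produce. A sketch of such a pod; the name restricted-pod comes from the event, while the selector key/value and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod               # name taken from the event above
spec:
  nodeSelector:
    label: nonempty                  # any key/value that no node carries
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1      # assumed placeholder image

With three nodes and no matching labels, the scheduler reports "0/3 nodes are available: 3 node(s) didn't match node selector." and the pod stays Pending, which is the condition the spec waits for.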
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:50:26.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2576" for this suite. May 12 10:50:32.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:50:32.405: INFO: namespace sched-pred-2576 deletion completed in 6.283308115s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.615 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:50:32.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 10:50:37.403: INFO: Successfully updated pod "labelsupdatec0b8e2bb-758e-40b4-a9ad-a0fa909be8e1" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:50:39.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8209" for this suite. 
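The labels-update spec above mounts the pod's own labels through a downward API volume and then edits them. A sketch, assuming busybox and illustrative labels and paths (the real pod name labelsupdatec0b8e2bb-... is generated):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example         # illustrative
  labels:
    key: value1
spec:
  containers:
  - name: client-container           # assumed, matching the suite's other specs
    image: busybox                   # assumed
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After something like kubectl label pod labelsupdate-example key=value2 --overwrite, the kubelet rewrites /etc/podinfo/labels in place; "Successfully updated pod" above marks the label change, and the spec then waits for the new value to appear in the file.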
May 12 10:51:01.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:51:01.780: INFO: namespace downward-api-8209 deletion completed in 22.311110451s • [SLOW TEST:29.374 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:51:01.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-b11e2d43-4d37-4059-9ad1-bfa95dab205c in namespace container-probe-8439 May 12 10:51:09.896: INFO: Started pod test-webserver-b11e2d43-4d37-4059-9ad1-bfa95dab205c in namespace container-probe-8439 STEP: checking the pod's current state and verifying that restartCount is present May 12 10:51:09.900: INFO: Initial restart count of pod test-webserver-b11e2d43-4d37-4059-9ad1-bfa95dab205c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:55:10.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8439" for this suite. 
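The four-minute gap between "Initial restart count ... is 0" (10:51) and teardown (10:55) is the spec watching a healthy liveness probe never fire. A sketch of the kind of probe involved; the image tag and timing values are assumptions, and only the /healthz path is named in the spec title:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example       # the real name is generated, as logged above
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed tag
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz               # endpoint from the spec title
        port: 80
      initialDelaySeconds: 15        # assumed values
      failureThreshold: 1

As long as /healthz keeps returning a 2xx status, restartCount stays 0 and the spec passes.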
May 12 10:55:16.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:55:16.980: INFO: namespace container-probe-8439 deletion completed in 6.193394254s • [SLOW TEST:255.200 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:55:16.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 10:55:27.175: INFO: DNS probes using dns-test-49ad64d0-30b8-4501-a274-fe171e429221 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 10:55:39.749: INFO: File wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:39.752: INFO: File jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:39.752: INFO: Lookups using dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 failed for: [wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local] May 12 10:55:44.755: INFO: File wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. 
' instead of 'bar.example.com.' May 12 10:55:44.759: INFO: File jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:44.759: INFO: Lookups using dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 failed for: [wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local] May 12 10:55:49.758: INFO: File wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:49.761: INFO: File jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:49.761: INFO: Lookups using dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 failed for: [wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local] May 12 10:55:54.757: INFO: File wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local from pod dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 10:55:54.759: INFO: Lookups using dns-4112/dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 failed for: [wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local] May 12 10:55:59.761: INFO: DNS probes using dns-test-1ae9cd70-a913-4e55-a8cf-1d73b89ad7e6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4112.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4112.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 10:56:13.354: INFO: DNS probes using dns-test-acfc1902-f2a7-40e7-9749-6b52fd0ff3af succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:56:13.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4112" for this suite. 
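Everything in the DNS spec above hangs off one ExternalName service; the "failures" logged between 10:55:39 and 10:55:54 are just the propagation window after the externalName field is patched. A sketch of the service, using the name, namespace, and hostnames that appear in the log:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4112
spec:
  type: ExternalName
  externalName: foo.example.com      # patched to bar.example.com mid-test

For an ExternalName service, cluster DNS answers dns-test-service-3.dns-4112.svc.cluster.local with a CNAME to the externalName, which is why the probe pods run dig ... CNAME; the final phase switches the service to type=ClusterIP and probes an A record instead.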
May 12 10:56:26.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:26.237: INFO: namespace dns-4112 deletion completed in 12.322326189s • [SLOW TEST:69.257 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:56:26.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d25b9d58-f3d4-43e4-bcce-b597d227cc66 STEP: Creating a pod to test consume configMaps May 12 10:56:27.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260" in namespace "projected-5420" to be "success or failure" May 12 10:56:27.595: INFO: Pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260": Phase="Pending", Reason="", readiness=false. Elapsed: 482.949037ms May 12 10:56:29.599: INFO: Pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486416191s May 12 10:56:31.602: INFO: Pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489769441s May 12 10:56:33.936: INFO: Pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.823968079s STEP: Saw pod success May 12 10:56:33.936: INFO: Pod "pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260" satisfied condition "success or failure" May 12 10:56:33.940: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260 container projected-configmap-volume-test: STEP: delete the pod May 12 10:56:34.177: INFO: Waiting for pod pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260 to disappear May 12 10:56:34.449: INFO: Pod pod-projected-configmaps-edf02c1a-b5f9-42ca-afac-54e4039cf260 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:56:34.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5420" for this suite. 
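The projected-configMap spec above is the projected-volume variant of an ordinary configMap mount. A sketch, with the configMap name, key, and image as assumptions (the container name is logged):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test  # container name as logged above
    image: busybox                         # assumed
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: my-config                  # real name is generated, see the log
          items:
          - key: data-1                    # assumed key
            path: data-1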
May 12 10:56:40.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:40.576: INFO: namespace projected-5420 deletion completed in 6.122198527s • [SLOW TEST:14.339 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:56:40.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:56:40.986: INFO: Creating deployment "test-recreate-deployment" May 12 10:56:41.014: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 12 10:56:41.080: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 12 10:56:43.308: INFO: Waiting for deployment "test-recreate-deployment" to complete May 12 10:56:43.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:56:45.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724877801, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 10:56:47.463: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 10:56:47.469: INFO: Updating deployment test-recreate-deployment May 12 10:56:47.469: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 10:56:49.665: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7063,SelfLink:/apis/apps/v1/namespaces/deployment-7063/deployments/test-recreate-deployment,UID:0873e032-bb5c-439f-87cb-5e25b05d7505,ResourceVersion:10460329,Generation:2,CreationTimestamp:2020-05-12 10:56:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-12 10:56:48 +0000 UTC 2020-05-12 10:56:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-12 10:56:49 +0000 UTC 2020-05-12 10:56:41 +0000
UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 12 10:56:49.675: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7063,SelfLink:/apis/apps/v1/namespaces/deployment-7063/replicasets/test-recreate-deployment-5c8c9cc69d,UID:a77f9919-bba6-40e7-b0e0-e799155cb273,ResourceVersion:10460328,Generation:1,CreationTimestamp:2020-05-12 10:56:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0873e032-bb5c-439f-87cb-5e25b05d7505 0xc002dd95a7 0xc002dd95a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:56:49.675: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 10:56:49.675: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7063,SelfLink:/apis/apps/v1/namespaces/deployment-7063/replicasets/test-recreate-deployment-6df85df6b9,UID:cafd1f90-450f-44bd-a475-edb1ac7648f2,ResourceVersion:10460316,Generation:2,CreationTimestamp:2020-05-12 10:56:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0873e032-bb5c-439f-87cb-5e25b05d7505 0xc002dd9677 0xc002dd9678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 10:56:49.676: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lvpwm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lvpwm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7063,SelfLink:/api/v1/namespaces/deployment-7063/pods/test-recreate-deployment-5c8c9cc69d-lvpwm,UID:ed3fce87-4526-49ca-bb4d-f30126d012be,ResourceVersion:10460330,Generation:0,CreationTimestamp:2020-05-12 10:56:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d a77f9919-bba6-40e7-b0e0-e799155cb273 0xc00265e857 0xc00265e858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sz4gp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sz4gp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sz4gp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00265e8d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00265e8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:56:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:56:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:56:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:56:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-12 10:56:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:56:49.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7063" for this suite. 
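The object dumps above show the mechanics: Strategy{Type:Recreate,...}, the old ReplicaSet (...-6df85df6b9, redis) scaled to Replicas:*0, and only then the new ReplicaSet (...-5c8c9cc69d, nginx) creating a still-Pending pod. A sketch of the deployment as first created, using only names and images that appear in the dumps:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                   # tear down old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

The mid-test rollout swaps the container for docker.io/library/nginx:1.14-alpine; with Recreate there is never a moment when pods from both revisions run side by side, which is the property the spec verifies.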
May 12 10:56:55.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:56:55.742: INFO: namespace deployment-7063 deletion completed in 6.063209668s • [SLOW TEST:15.166 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:56:55.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 12 10:57:03.317: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2854 pod-service-account-c4909b2e-5998-4f97-84fd-2aa7976cbc9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 12 10:57:13.689: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2854 pod-service-account-c4909b2e-5998-4f97-84fd-2aa7976cbc9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 12 10:57:13.874: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2854 pod-service-account-c4909b2e-5998-4f97-84fd-2aa7976cbc9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:57:14.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2854" for this suite. 
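The three kubectl exec calls above read the standard files of the auto-mounted service-account volume. A sketch of the kind of pod involved; the container name test comes from the exec commands, while the image and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example  # the real name is generated
spec:
  serviceAccountName: default
  containers:
  - name: test                       # container name from the exec calls above
    image: busybox                   # assumed
    command: ["sleep", "3600"]       # stay alive so the files can be exec-read

Unless automounting is disabled, the kubelet projects token, ca.crt, and namespace under /var/run/secrets/kubernetes.io/serviceaccount, which are exactly the three paths read above.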
May 12 10:57:20.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:20.524: INFO: namespace svcaccounts-2854 deletion completed in 6.481150727s • [SLOW TEST:24.782 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:57:20.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-473c172f-377c-4bdc-8b71-b3eab3a6bf3e STEP: Creating a pod to test consume secrets May 12 10:57:21.239: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72" in namespace "projected-2418" to be "success or failure" May 12 10:57:21.251: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72": Phase="Pending", Reason="", readiness=false. Elapsed: 12.66682ms May 12 10:57:23.255: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016305837s May 12 10:57:25.259: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020425625s May 12 10:57:27.608: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368965742s May 12 10:57:29.611: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.372361807s STEP: Saw pod success May 12 10:57:29.611: INFO: Pod "pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72" satisfied condition "success or failure" May 12 10:57:29.613: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72 container projected-secret-volume-test: STEP: delete the pod May 12 10:57:30.015: INFO: Waiting for pod pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72 to disappear May 12 10:57:30.093: INFO: Pod pod-projected-secrets-4f4522ab-2fcb-4bef-b1dc-bac959b44c72 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:57:30.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2418" for this suite. 
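"With mappings" in the spec title means the secret key is exposed under a remapped file path via items. A sketch, with the secret name, key, and image as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # assumed, following the suite's naming
    image: busybox                       # assumed
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: my-secret                # real name is generated, see the log
          items:
          - key: data-1                  # assumed key
            path: new-path-data-1        # the "mapping": a different file name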
May 12 10:57:36.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:36.441: INFO: namespace projected-2418 deletion completed in 6.344558169s • [SLOW TEST:15.916 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:57:36.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 10:57:36.548: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d" in namespace "projected-7526" to be "success or failure" May 12 10:57:36.556: INFO: Pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098704ms May 12 10:57:38.793: INFO: Pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245531779s May 12 10:57:40.817: INFO: Pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269522203s May 12 10:57:42.821: INFO: Pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27279985s STEP: Saw pod success May 12 10:57:42.821: INFO: Pod "downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d" satisfied condition "success or failure" May 12 10:57:42.823: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d container client-container: STEP: delete the pod May 12 10:57:42.911: INFO: Waiting for pod downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d to disappear May 12 10:57:42.924: INFO: Pod downwardapi-volume-31dbf0d7-20a9-433e-908c-b01aa9f11d4d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:57:42.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7526" for this suite. 
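The point of the spec above is that resourceFieldRef falls back to node allocatable when the container sets no CPU limit. A sketch; the container name comes from the log, while the image and paths are assumed:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name as logged above
    image: busybox                   # assumed
    # no resources.limits.cpu here, so the projected file reports
    # the node's allocatable CPU instead
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu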
May 12 10:57:48.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:49.010: INFO: namespace projected-7526 deletion completed in 6.082570355s • [SLOW TEST:12.568 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:57:49.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 10:57:49.083: INFO: Waiting up to 5m0s for pod "downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424" in namespace "downward-api-3671" to be "success or failure" May 12 10:57:49.104: INFO: Pod "downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424": Phase="Pending", Reason="", readiness=false. Elapsed: 21.23116ms May 12 10:57:51.189: INFO: Pod "downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105842492s May 12 10:57:53.194: INFO: Pod "downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110748944s STEP: Saw pod success May 12 10:57:53.194: INFO: Pod "downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424" satisfied condition "success or failure" May 12 10:57:53.197: INFO: Trying to get logs from node iruya-worker pod downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424 container dapi-container: STEP: delete the pod May 12 10:57:53.507: INFO: Waiting for pod downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424 to disappear May 12 10:57:53.512: INFO: Pod downward-api-dccd401c-79d4-4f62-9665-0bbe04fea424 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:57:53.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3671" for this suite. 
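This is the environment-variable counterpart of the volume-based spec just before it: with no resources block, both limits fall back to node allocatable. A sketch (container name from the log; the rest assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name as logged above
    image: busybox                   # assumed
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:                             # no resources block, so node allocatable is reported
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

(containerName can be omitted in resourceFieldRef here because the pod has a single container.)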
May 12 10:57:59.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:57:59.604: INFO: namespace downward-api-3671 deletion completed in 6.089101196s • [SLOW TEST:10.594 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:57:59.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 10:58:00.758: INFO: Waiting up to 5m0s for pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d" in namespace "emptydir-392" to be "success or failure" May 12 10:58:00.961: INFO: Pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 202.218497ms May 12 10:58:03.165: INFO: Pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406881858s May 12 10:58:05.169: INFO: Pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410264019s May 12 10:58:07.172: INFO: Pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413351888s STEP: Saw pod success May 12 10:58:07.172: INFO: Pod "pod-d7794e15-da33-4746-afd2-010b3f88bf4d" satisfied condition "success or failure" May 12 10:58:07.174: INFO: Trying to get logs from node iruya-worker pod pod-d7794e15-da33-4746-afd2-010b3f88bf4d container test-container: STEP: delete the pod May 12 10:58:07.226: INFO: Waiting for pod pod-d7794e15-da33-4746-afd2-010b3f88bf4d to disappear May 12 10:58:07.239: INFO: Pod pod-d7794e15-da33-4746-afd2-010b3f88bf4d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:58:07.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-392" for this suite. 
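The (root,0644,default) triple in the spec title encodes: write as root, expect mode 0644, use the default emptyDir medium (node disk rather than tmpfs). A sketch; the container name is logged, the image and command are assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name as logged above
    image: busybox                   # assumed
    securityContext:
      runAsUser: 0                   # "root" from the spec title
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium; medium: Memory would be tmpfs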
May 12 10:58:13.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:58:13.535: INFO: namespace emptydir-392 deletion completed in 6.293728508s • [SLOW TEST:13.930 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:58:13.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 10:58:18.640: INFO: Waiting up to 5m0s for pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a" in namespace "pods-9539" to be "success or failure" May 12 10:58:18.677: INFO: Pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.158602ms May 12 10:58:21.243: INFO: Pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603131284s May 12 10:58:23.392: INFO: Pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a": Phase="Running", Reason="", readiness=true. Elapsed: 4.752370893s May 12 10:58:25.397: INFO: Pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.756906456s STEP: Saw pod success May 12 10:58:25.397: INFO: Pod "client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a" satisfied condition "success or failure" May 12 10:58:25.400: INFO: Trying to get logs from node iruya-worker pod client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a container env3cont: STEP: delete the pod May 12 10:58:25.775: INFO: Waiting for pod client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a to disappear May 12 10:58:25.867: INFO: Pod client-envvars-e12d0a6f-ff66-4ca5-afda-de5d082cd34a no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:58:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9539" for this suite. 
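The wait before the client pod starts (10:58:13 to 10:58:18) is the spec setting up a server pod and a service first, since the kubelet only injects env vars for services that already exist when a pod starts. A sketch of such a service (all names here are assumptions; the log shows only the client container env3cont):

apiVersion: v1
kind: Service
metadata:
  name: fooservice                   # assumed name
spec:
  selector:
    name: server-pod                 # assumed label on the server pod
  ports:
  - protocol: TCP
    port: 8765
    targetPort: 8080

A pod created afterwards in the same namespace, like client-envvars-... above, then sees FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, and the Docker-link-style FOOSERVICE_PORT_* variables, which is what the env3cont container prints for verification.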
May 12 10:59:11.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:59:11.974: INFO: namespace pods-9539 deletion completed in 46.103755762s • [SLOW TEST:58.439 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:59:11.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-1f64e66b-1de2-43ee-91e0-1f5186335039 STEP: Creating secret with name secret-projected-all-test-volume-0c0c7c0c-aece-428e-9de8-ab436aedef2f STEP: Creating a pod to test Check all projections for projected volume plugin May 12 10:59:12.336: INFO: Waiting up to 5m0s for pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c" in namespace "projected-1563" to be "success or failure" May 12 10:59:12.359: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.970455ms May 12 10:59:14.362: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026647337s May 12 10:59:16.556: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220319321s May 12 10:59:18.560: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224484602s May 12 10:59:20.687: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.351746745s STEP: Saw pod success May 12 10:59:20.687: INFO: Pod "projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c" satisfied condition "success or failure" May 12 10:59:20.690: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c container projected-all-volume-test: STEP: delete the pod May 12 10:59:20.899: INFO: Waiting for pod projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c to disappear May 12 10:59:20.945: INFO: Pod projected-volume-b8bfecbc-2561-4332-ac79-372a66b0326c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:59:20.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1563" for this suite. 
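The "combined" spec above is one projected volume carrying all three source types at once. A sketch; the container name comes from the log, while the resource names, keys, and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test  # container name as logged above
    image: busybox                   # assumed
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                       # three source types in a single volume
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: my-config            # real names are generated, see the log
          items:
          - key: data                # assumed key
            path: cm-data
      - secret:
          name: my-secret
          items:
          - key: data                # assumed key
            path: secret-data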
May 12 10:59:29.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 10:59:29.844: INFO: namespace projected-1563 deletion completed in 8.89460151s • [SLOW TEST:17.869 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 10:59:29.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 10:59:38.591: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ed0dad4f-166f-42a4-90ea-4a386a5bdc5f,GenerateName:,Namespace:events-3645,SelfLink:/api/v1/namespaces/events-3645/pods/send-events-ed0dad4f-166f-42a4-90ea-4a386a5bdc5f,UID:70ac7704-5995-4bdf-b9a0-57bea72ea35d,ResourceVersion:10460886,Generation:0,CreationTimestamp:2020-05-12 10:59:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 105432975,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r44d9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r44d9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-r44d9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00233fc00} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00233fc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:59:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:59:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:59:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 10:59:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.162,StartTime:2020-05-12 10:59:30 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-12 10:59:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f974e83add410c26ffe7593df7ea8329b89a3e8561d56d535d437131bffe7cb8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 12 10:59:40.748: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 10:59:42.752: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 10:59:42.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3645" for this suite. May 12 11:00:23.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:00:23.276: INFO: namespace events-3645 deletion completed in 40.186003106s • [SLOW TEST:53.431 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:00:23.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 11:00:25.293: INFO: Pod name wrapped-volume-race-12d11756-d178-40cf-a723-4b510bb96786: Found 0 pods out of 5 May 12 11:00:31.028: INFO: Pod name wrapped-volume-race-12d11756-d178-40cf-a723-4b510bb96786: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-12d11756-d178-40cf-a723-4b510bb96786 in namespace emptydir-wrapper-6642, will wait for the garbage collector to delete the pods May 12 11:00:49.204: INFO: Deleting ReplicationController 
wrapped-volume-race-12d11756-d178-40cf-a723-4b510bb96786 took: 8.417189ms May 12 11:00:49.504: INFO: Terminating ReplicationController wrapped-volume-race-12d11756-d178-40cf-a723-4b510bb96786 pods took: 300.212003ms STEP: Creating RC which spawns configmap-volume pods May 12 11:01:32.377: INFO: Pod name wrapped-volume-race-7a2155ff-ce1c-4f57-9557-f50cd6006ef3: Found 0 pods out of 5 May 12 11:01:37.387: INFO: Pod name wrapped-volume-race-7a2155ff-ce1c-4f57-9557-f50cd6006ef3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7a2155ff-ce1c-4f57-9557-f50cd6006ef3 in namespace emptydir-wrapper-6642, will wait for the garbage collector to delete the pods May 12 11:01:54.936: INFO: Deleting ReplicationController wrapped-volume-race-7a2155ff-ce1c-4f57-9557-f50cd6006ef3 took: 74.905985ms May 12 11:01:55.336: INFO: Terminating ReplicationController wrapped-volume-race-7a2155ff-ce1c-4f57-9557-f50cd6006ef3 pods took: 400.26481ms STEP: Creating RC which spawns configmap-volume pods May 12 11:02:42.555: INFO: Pod name wrapped-volume-race-5841b938-269e-4597-ae2b-d9965589978a: Found 0 pods out of 5 May 12 11:02:47.561: INFO: Pod name wrapped-volume-race-5841b938-269e-4597-ae2b-d9965589978a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5841b938-269e-4597-ae2b-d9965589978a in namespace emptydir-wrapper-6642, will wait for the garbage collector to delete the pods May 12 11:03:05.804: INFO: Deleting ReplicationController wrapped-volume-race-5841b938-269e-4597-ae2b-d9965589978a took: 152.672777ms May 12 11:03:06.304: INFO: Terminating ReplicationController wrapped-volume-race-5841b938-269e-4597-ae2b-d9965589978a pods took: 500.210423ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:03:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6642" for this suite. 
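
The three create/delete cycles above all drive the same shape of object: a ReplicationController whose pod template mounts every one of the 50 ConfigMaps as its own volume, which is the pattern that historically raced in the emptyDir wrapper. A rough sketch of that object in corev1 types (names, image, and command are hypothetical):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One volume (and one mount) per ConfigMap; the spec uses 50 of them
	// across 5 replicas to provoke concurrent mount setup.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	replicas := int32(5)
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "wrapped-volume-race"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "wrapped-volume-race"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:         "test-container",
						Image:        "busybox",
						Command:      []string{"sleep", "10000"},
						VolumeMounts: mounts,
					}},
					Volumes: volumes,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(&rc, "", "  ")
	fmt.Println(string(out))
}
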
May 12 11:04:07.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:04:07.170: INFO: namespace emptydir-wrapper-6642 deletion completed in 12.083576082s • [SLOW TEST:223.894 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:04:07.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-46dcf476-9783-4c3e-9c4f-6ead2afcfb44 STEP: Creating a pod to test consume configMaps May 12 11:04:07.535: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615" in namespace "projected-4787" to be "success or failure" May 12 11:04:07.566: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 30.941257ms May 12 11:04:09.570: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034854244s May 12 11:04:12.136: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6009711s May 12 11:04:14.140: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604687368s May 12 11:04:16.145: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609900797s May 12 11:04:18.858: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Pending", Reason="", readiness=false. Elapsed: 11.323286869s May 12 11:04:20.955: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Running", Reason="", readiness=true. Elapsed: 13.419942438s May 12 11:04:22.959: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.423752217s STEP: Saw pod success May 12 11:04:22.959: INFO: Pod "pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615" satisfied condition "success or failure" May 12 11:04:22.962: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615 container projected-configmap-volume-test: STEP: delete the pod May 12 11:04:23.442: INFO: Waiting for pod pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615 to disappear May 12 11:04:23.464: INFO: Pod pod-projected-configmaps-79712f5f-fb84-4fc3-8e6f-58a4abb0e615 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:04:23.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4787" for this suite. May 12 11:04:31.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:04:31.539: INFO: namespace projected-4787 deletion completed in 8.072180844s • [SLOW TEST:24.370 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:04:31.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:04:31.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4093' May 12 11:04:32.216: INFO: stderr: "" May 12 11:04:32.216: INFO: stdout: "replicationcontroller/redis-master created\n" May 12 11:04:32.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4093' May 12 11:04:33.579: INFO: stderr: "" May 12 11:04:33.579: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
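
The two objects just piped to kubectl create -f - are a one-replica redis-master ReplicationController and a matching Service. A rough Go equivalent of those manifests, with field values inferred from the describe output that follows (anything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"app": "redis", "role": "master"}
	replicas := int32(1)
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis-master",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
						Ports: []corev1.ContainerPort{{Name: "redis-server", ContainerPort: 6379}},
					}},
				},
			},
		},
	}
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Labels: labels},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports: []corev1.ServicePort{{
				Port: 6379,
				// Named target port, matching "TargetPort: redis-server/TCP"
				// in the service describe output below.
				TargetPort: intstr.FromString("redis-server"),
			}},
		},
	}
	for _, obj := range []interface{}{rc, svc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
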
May 12 11:04:34.872: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:34.872: INFO: Found 0 / 1 May 12 11:04:35.596: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:35.596: INFO: Found 0 / 1 May 12 11:04:36.980: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:36.980: INFO: Found 0 / 1 May 12 11:04:37.741: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:37.741: INFO: Found 0 / 1 May 12 11:04:38.656: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:38.656: INFO: Found 0 / 1 May 12 11:04:39.686: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:39.687: INFO: Found 1 / 1 May 12 11:04:39.687: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 11:04:39.690: INFO: Selector matched 1 pods for map[app:redis] May 12 11:04:39.690: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 11:04:39.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-bljx5 --namespace=kubectl-4093' May 12 11:04:39.926: INFO: stderr: "" May 12 11:04:39.926: INFO: stdout: "Name: redis-master-bljx5\nNamespace: kubectl-4093\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 12 May 2020 11:04:32 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.178\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://c614ab7c22d91b4c1c20774a49529381533fd2c2241a9d139f30922fbf403955\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 11:04:38 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-j78kx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-j78kx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-j78kx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned kubectl-4093/redis-master-bljx5 to iruya-worker\n Normal Pulled 4s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" May 12 11:04:39.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4093' May 12 11:04:40.050: INFO: stderr: "" May 12 11:04:40.050: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4093\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s 
replication-controller Created pod: redis-master-bljx5\n" May 12 11:04:40.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4093' May 12 11:04:40.747: INFO: stderr: "" May 12 11:04:40.747: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4093\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.192.240\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.178:6379\nSession Affinity: None\nEvents: \n" May 12 11:04:40.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 12 11:04:40.895: INFO: stderr: "" May 12 11:04:40.895: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 11:03:51 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 11:03:51 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 11:03:51 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 11:03:51 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., 
overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 12 11:04:40.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4093' May 12 11:04:41.006: INFO: stderr: "" May 12 11:04:41.006: INFO: stdout: "Name: kubectl-4093\nLabels: e2e-framework=kubectl\n e2e-run=bfa7519f-1832-4aff-9b5c-2e82adb5f460\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:04:41.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4093" for this suite. May 12 11:05:05.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:05:05.818: INFO: namespace kubectl-4093 deletion completed in 24.809712011s • [SLOW TEST:34.279 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:05:05.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-ed508535-6e8d-4b64-908a-a331801f4ed7 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:05:06.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3129" for this suite. 
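
The empty-key spec above never gets as far as creating a pod: the apiserver rejects the ConfigMap at validation time. A minimal sketch of triggering that rejection with client-go, assuming a client-go vintage matching this v1.15 cluster (pre-0.18, so typed calls take no context argument) and a hypothetical namespace:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run above uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key: invalid
	}
	// Expected to fail server-side validation; the test asserts exactly this.
	_, err = client.CoreV1().ConfigMaps("default").Create(cm)
	fmt.Println("expected validation error:", err)
}
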
May 12 11:05:12.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:05:12.527: INFO: namespace configmap-3129 deletion completed in 6.307045065s • [SLOW TEST:6.708 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:05:12.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d93212bc-b7ed-4e33-b53b-9f1d0c56a3e4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d93212bc-b7ed-4e33-b53b-9f1d0c56a3e4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:06:41.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5213" for this suite. 
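
The update step in the spec above is a plain ConfigMap write; the interesting part is that the kubelet only rewrites the projected files on a later sync loop, which is why the spec spends most of its runtime in "waiting to observe update in volume". A sketch of the update side, with hypothetical names and the pre-0.18 (context-free) client-go API matching this v1.15 cluster:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cms := client.CoreV1().ConfigMaps("default")
	cm, err := cms.Get("projected-configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // mutate the key the pod has mounted
	if _, err := cms.Update(cm); err != nil {
		panic(err)
	}
	// Consumers must poll the mounted path: propagation is eventually
	// consistent, not synchronous with the API write.
	fmt.Println("configmap updated; mounted file will follow on a kubelet sync")
}
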
May 12 11:07:03.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:07:03.407: INFO: namespace projected-5213 deletion completed in 22.257382276s • [SLOW TEST:110.880 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:07:03.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 11:07:04.447: INFO: Waiting up to 5m0s for pod "pod-4221bd79-fa00-441f-8157-c1814c43960b" in namespace "emptydir-6055" to be "success or failure" May 12 11:07:04.725: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b": Phase="Pending", Reason="", readiness=false. Elapsed: 277.598553ms May 12 11:07:06.727: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279993877s May 12 11:07:08.730: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282750807s May 12 11:07:10.734: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b": Phase="Running", Reason="", readiness=true. Elapsed: 6.286955414s May 12 11:07:12.738: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.290755584s STEP: Saw pod success May 12 11:07:12.738: INFO: Pod "pod-4221bd79-fa00-441f-8157-c1814c43960b" satisfied condition "success or failure" May 12 11:07:12.740: INFO: Trying to get logs from node iruya-worker pod pod-4221bd79-fa00-441f-8157-c1814c43960b container test-container: STEP: delete the pod May 12 11:07:13.039: INFO: Waiting for pod pod-4221bd79-fa00-441f-8157-c1814c43960b to disappear May 12 11:07:13.281: INFO: Pod pod-4221bd79-fa00-441f-8157-c1814c43960b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:07:13.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6055" for this suite. 
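
The (non-root,0666,default) case above boils down to: an emptyDir volume on the node's default medium, a container running as a non-root UID, and an assertion that a file created with mode 0666 keeps that mode. A minimal busybox-based sketch of the same shape (the real spec uses the e2e mounttest image; image, command, and UID here are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with 0666 and print its mode, roughly what
				// the spec asserts.
				Command:         []string{"sh", "-c", "umask 0 && touch /test-volume/f && stat -c %a /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource => default (node disk) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}
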
May 12 11:07:21.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:07:21.918: INFO: namespace emptydir-6055 deletion completed in 8.633064777s • [SLOW TEST:18.511 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:07:21.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 12 11:07:23.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5972' May 12 11:07:30.312: INFO: stderr: "" May 12 11:07:30.312: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:07:30.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:30.550: INFO: stderr: "" May 12 11:07:30.550: INFO: stdout: "update-demo-nautilus-hbnpn update-demo-nautilus-v9rvx " May 12 11:07:30.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:30.647: INFO: stderr: "" May 12 11:07:30.647: INFO: stdout: "" May 12 11:07:30.647: INFO: update-demo-nautilus-hbnpn is created but not running May 12 11:07:35.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:36.470: INFO: stderr: "" May 12 11:07:36.470: INFO: stdout: "update-demo-nautilus-hbnpn update-demo-nautilus-v9rvx " May 12 11:07:36.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:37.439: INFO: stderr: "" May 12 11:07:37.439: INFO: stdout: "" May 12 11:07:37.439: INFO: update-demo-nautilus-hbnpn is created but not running May 12 11:07:42.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:42.611: INFO: stderr: "" May 12 11:07:42.611: INFO: stdout: "update-demo-nautilus-hbnpn update-demo-nautilus-v9rvx " May 12 11:07:42.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:42.957: INFO: stderr: "" May 12 11:07:42.957: INFO: stdout: "true" May 12 11:07:42.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:43.515: INFO: stderr: "" May 12 11:07:43.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:07:43.515: INFO: validating pod update-demo-nautilus-hbnpn May 12 11:07:43.535: INFO: got data: { "image": "nautilus.jpg" } May 12 11:07:43.535: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:07:43.535: INFO: update-demo-nautilus-hbnpn is verified up and running May 12 11:07:43.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9rvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:43.611: INFO: stderr: "" May 12 11:07:43.611: INFO: stdout: "true" May 12 11:07:43.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9rvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:43.820: INFO: stderr: "" May 12 11:07:43.820: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:07:43.820: INFO: validating pod update-demo-nautilus-v9rvx May 12 11:07:43.824: INFO: got data: { "image": "nautilus.jpg" } May 12 11:07:43.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:07:43.824: INFO: update-demo-nautilus-v9rvx is verified up and running STEP: scaling down the replication controller May 12 11:07:43.827: INFO: scanned /root for discovery docs: May 12 11:07:43.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5972' May 12 11:07:45.629: INFO: stderr: "" May 12 11:07:45.629: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 11:07:45.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:46.083: INFO: stderr: "" May 12 11:07:46.083: INFO: stdout: "update-demo-nautilus-hbnpn update-demo-nautilus-v9rvx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 11:07:51.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:51.190: INFO: stderr: "" May 12 11:07:51.190: INFO: stdout: "update-demo-nautilus-hbnpn update-demo-nautilus-v9rvx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 11:07:56.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:56.340: INFO: stderr: "" May 12 11:07:56.340: INFO: stdout: "update-demo-nautilus-hbnpn " May 12 11:07:56.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:56.432: INFO: stderr: "" May 12 11:07:56.432: INFO: stdout: "true" May 12 11:07:56.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:56.524: INFO: stderr: "" May 12 11:07:56.524: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:07:56.524: INFO: validating pod update-demo-nautilus-hbnpn May 12 11:07:56.666: INFO: got data: { "image": "nautilus.jpg" } May 12 11:07:56.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:07:56.666: INFO: update-demo-nautilus-hbnpn is verified up and running STEP: scaling up the replication controller May 12 11:07:56.667: INFO: scanned /root for discovery docs: May 12 11:07:56.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5972' May 12 11:07:58.047: INFO: stderr: "" May 12 11:07:58.047: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:07:58.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:07:58.152: INFO: stderr: "" May 12 11:07:58.152: INFO: stdout: "update-demo-nautilus-dx7mx update-demo-nautilus-hbnpn " May 12 11:07:58.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx7mx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:07:58.455: INFO: stderr: "" May 12 11:07:58.455: INFO: stdout: "" May 12 11:07:58.455: INFO: update-demo-nautilus-dx7mx is created but not running May 12 11:08:03.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972' May 12 11:08:03.891: INFO: stderr: "" May 12 11:08:03.891: INFO: stdout: "update-demo-nautilus-dx7mx update-demo-nautilus-hbnpn " May 12 11:08:03.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx7mx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:08:03.997: INFO: stderr: "" May 12 11:08:03.997: INFO: stdout: "true" May 12 11:08:03.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx7mx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:08:04.089: INFO: stderr: "" May 12 11:08:04.089: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:08:04.089: INFO: validating pod update-demo-nautilus-dx7mx May 12 11:08:04.095: INFO: got data: { "image": "nautilus.jpg" } May 12 11:08:04.095: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:08:04.095: INFO: update-demo-nautilus-dx7mx is verified up and running May 12 11:08:04.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:08:04.186: INFO: stderr: "" May 12 11:08:04.186: INFO: stdout: "true" May 12 11:08:04.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbnpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972' May 12 11:08:04.278: INFO: stderr: "" May 12 11:08:04.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:08:04.278: INFO: validating pod update-demo-nautilus-hbnpn May 12 11:08:04.281: INFO: got data: { "image": "nautilus.jpg" } May 12 11:08:04.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:08:04.281: INFO: update-demo-nautilus-hbnpn is verified up and running STEP: using delete to clean up resources May 12 11:08:04.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5972' May 12 11:08:04.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 11:08:04.384: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 11:08:04.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5972' May 12 11:08:04.488: INFO: stderr: "No resources found.\n" May 12 11:08:04.488: INFO: stdout: "" May 12 11:08:04.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5972 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 11:08:04.597: INFO: stderr: "" May 12 11:08:04.597: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:08:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5972" for this suite. May 12 11:08:26.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:08:26.729: INFO: namespace kubectl-5972 deletion completed in 22.08173518s • [SLOW TEST:64.811 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:08:26.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-fq95 STEP: Creating a pod to test atomic-volume-subpath May 12 11:08:27.040: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fq95" in namespace "subpath-5988" to be "success or failure" May 12 11:08:27.049: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.798121ms May 12 11:08:29.052: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011942148s May 12 11:08:31.060: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 4.020442739s May 12 11:08:33.065: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 6.025235034s May 12 11:08:35.069: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.029345219s May 12 11:08:37.073: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 10.03309461s May 12 11:08:39.077: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 12.037479088s May 12 11:08:41.081: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 14.04149782s May 12 11:08:43.085: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 16.045530807s May 12 11:08:45.089: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 18.04935414s May 12 11:08:47.234: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 20.194536277s May 12 11:08:49.239: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 22.199176906s May 12 11:08:51.243: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Running", Reason="", readiness=true. Elapsed: 24.203445969s May 12 11:08:53.248: INFO: Pod "pod-subpath-test-configmap-fq95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.20814415s STEP: Saw pod success May 12 11:08:53.248: INFO: Pod "pod-subpath-test-configmap-fq95" satisfied condition "success or failure" May 12 11:08:53.251: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-fq95 container test-container-subpath-configmap-fq95: STEP: delete the pod May 12 11:08:53.328: INFO: Waiting for pod pod-subpath-test-configmap-fq95 to disappear May 12 11:08:53.547: INFO: Pod pod-subpath-test-configmap-fq95 no longer exists STEP: Deleting pod pod-subpath-test-configmap-fq95 May 12 11:08:53.547: INFO: Deleting pod "pod-subpath-test-configmap-fq95" in namespace "subpath-5988" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:08:53.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5988" for this suite. 
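
The subpath spec above hinges on one field: VolumeMount.SubPath, which mounts a single path from inside a volume rather than the whole volume. For atomic-writer volumes such as ConfigMaps this pins the file the kubelet wrote at container start, which is exactly the property under test. A minimal sketch (ConfigMap name, key, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume/sub"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/sub",
					// Mount only this file from inside the volume.
					SubPath: "configmap-key",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "configmap-key"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}
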
May 12 11:08:59.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:08:59.660: INFO: namespace subpath-5988 deletion completed in 6.107819753s • [SLOW TEST:32.931 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:08:59.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-236a301d-7e7f-4a93-a0f4-cba133468309 STEP: Creating a pod to test consume secrets May 12 11:09:00.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0" in namespace "projected-6220" to be "success or failure" May 12 11:09:00.117: INFO: Pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061223ms May 12 11:09:02.169: INFO: Pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062443378s May 12 11:09:04.199: INFO: Pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09233147s May 12 11:09:06.204: INFO: Pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096969818s STEP: Saw pod success May 12 11:09:06.204: INFO: Pod "pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0" satisfied condition "success or failure" May 12 11:09:06.207: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0 container projected-secret-volume-test: STEP: delete the pod May 12 11:09:06.400: INFO: Waiting for pod pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0 to disappear May 12 11:09:06.529: INFO: Pod pod-projected-secrets-b78560d5-7f97-4231-81c6-4a3dec2d97d0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:06.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6220" for this suite. 
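
The projected-secret spec above is the single-source cousin of the combined-projection test earlier in this run: one projected volume whose only source is a Secret, consumed read-only by the container. A minimal sketch (secret name, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}
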
May 12 11:09:12.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:12.773: INFO: namespace projected-6220 deletion completed in 6.240134606s • [SLOW TEST:13.112 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:12.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 11:09:13.092: INFO: Waiting up to 5m0s for pod "pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5" in namespace "emptydir-1752" to be "success or failure" May 12 11:09:13.206: INFO: Pod "pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 114.237745ms May 12 11:09:15.210: INFO: Pod "pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117589701s May 12 11:09:17.212: INFO: Pod "pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120336344s STEP: Saw pod success May 12 11:09:17.212: INFO: Pod "pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5" satisfied condition "success or failure" May 12 11:09:17.214: INFO: Trying to get logs from node iruya-worker2 pod pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5 container test-container: STEP: delete the pod May 12 11:09:17.248: INFO: Waiting for pod pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5 to disappear May 12 11:09:17.274: INFO: Pod pod-091a5615-e3ab-41f8-9d16-c3c5fc055ad5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:17.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1752" for this suite. 
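
The (root,0777,tmpfs) variant above differs from the earlier emptyDir case in one field: Medium, which switches the volume from node disk to tmpfs. A minimal sketch, again with a busybox stand-in for the e2e mounttest image; creating the file as 0777 and checking the mount type approximates what the spec asserts:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Runs as root by default; create a 0777 file and show the
				// volume is tmpfs-backed.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f && grep /test-volume /proc/mounts"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" => tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}
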
May 12 11:09:23.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:23.726: INFO: namespace emptydir-1752 deletion completed in 6.447976754s • [SLOW TEST:10.952 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:23.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 12 11:09:23.790: INFO: Waiting up to 5m0s for pod "client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f" in namespace "containers-9178" to be "success or failure" May 12 11:09:23.794: INFO: Pod "client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453815ms May 12 11:09:25.799: INFO: Pod "client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009262602s May 12 11:09:27.803: INFO: Pod "client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013639141s STEP: Saw pod success May 12 11:09:27.803: INFO: Pod "client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f" satisfied condition "success or failure" May 12 11:09:27.805: INFO: Trying to get logs from node iruya-worker2 pod client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f container test-container: STEP: delete the pod May 12 11:09:27.839: INFO: Waiting for pod client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f to disappear May 12 11:09:27.842: INFO: Pod client-containers-7e2beb91-0ad0-4367-9ea2-df51bfd1cd0f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:27.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9178" for this suite. 
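
The "override all" spec above exercises the two container fields that replace what is baked into an image: Command maps onto the image's ENTRYPOINT and Args onto its CMD, and setting both overrides everything. A minimal sketch (image and values illustrative; the real spec uses an entrypoint-tester image that echoes its argv):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image ENTRYPOINT; Args replaces CMD.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(&pod, "", "  ")
	fmt.Println(string(out))
}

Leaving Command unset while setting Args keeps the image ENTRYPOINT but swaps its arguments, which is the other combination the Docker Containers suite covers.
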
May 12 11:09:33.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:33.928: INFO: namespace containers-9178 deletion completed in 6.083532127s • [SLOW TEST:10.203 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:33.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 12 11:09:33.991: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9880,SelfLink:/api/v1/namespaces/watch-9880/configmaps/e2e-watch-test-watch-closed,UID:8889482d-beba-4246-9c98-085135d74566,ResourceVersion:10463182,Generation:0,CreationTimestamp:2020-05-12 11:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:09:33.991: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9880,SelfLink:/api/v1/namespaces/watch-9880/configmaps/e2e-watch-test-watch-closed,UID:8889482d-beba-4246-9c98-085135d74566,ResourceVersion:10463183,Generation:0,CreationTimestamp:2020-05-12 11:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 11:09:34.027: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9880,SelfLink:/api/v1/namespaces/watch-9880/configmaps/e2e-watch-test-watch-closed,UID:8889482d-beba-4246-9c98-085135d74566,ResourceVersion:10463184,Generation:0,CreationTimestamp:2020-05-12 11:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:09:34.027: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9880,SelfLink:/api/v1/namespaces/watch-9880/configmaps/e2e-watch-test-watch-closed,UID:8889482d-beba-4246-9c98-085135d74566,ResourceVersion:10463185,Generation:0,CreationTimestamp:2020-05-12 11:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:34.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9880" for this suite. May 12 11:09:40.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:40.120: INFO: namespace watch-9880 deletion completed in 6.085831805s • [SLOW TEST:6.191 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:40.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0e9f5bfa-5624-432a-b26e-28a8be7ad41b STEP: Creating a pod to test consume secrets May 12 11:09:40.252: INFO: Waiting up to 5m0s for pod "pod-secrets-daff9353-9834-4477-baee-500a9cab9275" in namespace "secrets-2263" to be "success or failure" May 12 11:09:40.392: INFO: Pod "pod-secrets-daff9353-9834-4477-baee-500a9cab9275": Phase="Pending", Reason="", readiness=false. 
Elapsed: 139.430351ms May 12 11:09:42.396: INFO: Pod "pod-secrets-daff9353-9834-4477-baee-500a9cab9275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143235734s May 12 11:09:44.400: INFO: Pod "pod-secrets-daff9353-9834-4477-baee-500a9cab9275": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147317949s STEP: Saw pod success May 12 11:09:44.400: INFO: Pod "pod-secrets-daff9353-9834-4477-baee-500a9cab9275" satisfied condition "success or failure" May 12 11:09:44.403: INFO: Trying to get logs from node iruya-worker pod pod-secrets-daff9353-9834-4477-baee-500a9cab9275 container secret-volume-test: STEP: delete the pod May 12 11:09:45.136: INFO: Waiting for pod pod-secrets-daff9353-9834-4477-baee-500a9cab9275 to disappear May 12 11:09:45.157: INFO: Pod pod-secrets-daff9353-9834-4477-baee-500a9cab9275 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:45.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2263" for this suite. May 12 11:09:51.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:51.259: INFO: namespace secrets-2263 deletion completed in 6.099329154s • [SLOW TEST:11.139 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:51.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:09:51.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3951' May 12 11:09:51.481: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 11:09:51.481: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 12 11:09:51.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3951' May 12 11:09:51.736: INFO: stderr: "" May 12 11:09:51.736: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:09:51.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3951" for this suite. May 12 11:09:57.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:09:57.888: INFO: namespace kubectl-3951 deletion completed in 6.094903522s • [SLOW TEST:6.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:09:57.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 11:09:58.111: INFO: Waiting up to 5m0s for pod "pod-2430bc09-b55b-4041-b40a-0b6a9194b209" in namespace "emptydir-5554" to be "success or failure" May 12 11:09:58.148: INFO: Pod "pod-2430bc09-b55b-4041-b40a-0b6a9194b209": Phase="Pending", Reason="", readiness=false. Elapsed: 36.357674ms May 12 11:10:00.151: INFO: Pod "pod-2430bc09-b55b-4041-b40a-0b6a9194b209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039921981s May 12 11:10:02.155: INFO: Pod "pod-2430bc09-b55b-4041-b40a-0b6a9194b209": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043444133s STEP: Saw pod success May 12 11:10:02.155: INFO: Pod "pod-2430bc09-b55b-4041-b40a-0b6a9194b209" satisfied condition "success or failure" May 12 11:10:02.157: INFO: Trying to get logs from node iruya-worker pod pod-2430bc09-b55b-4041-b40a-0b6a9194b209 container test-container: STEP: delete the pod May 12 11:10:02.171: INFO: Waiting for pod pod-2430bc09-b55b-4041-b40a-0b6a9194b209 to disappear May 12 11:10:02.175: INFO: Pod pod-2430bc09-b55b-4041-b40a-0b6a9194b209 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:10:02.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5554" for this suite. May 12 11:10:08.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:10:08.350: INFO: namespace emptydir-5554 deletion completed in 6.172401298s • [SLOW TEST:10.462 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:10:08.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 12 11:10:08.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1063' May 12 11:10:08.638: INFO: stderr: "" May 12 11:10:08.638: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:10:08.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1063' May 12 11:10:08.927: INFO: stderr: "" May 12 11:10:08.927: INFO: stdout: "update-demo-nautilus-7wrnw update-demo-nautilus-gxhgs " May 12 11:10:08.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wrnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:09.122: INFO: stderr: "" May 12 11:10:09.122: INFO: stdout: "" May 12 11:10:09.122: INFO: update-demo-nautilus-7wrnw is created but not running May 12 11:10:14.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1063' May 12 11:10:14.442: INFO: stderr: "" May 12 11:10:14.442: INFO: stdout: "update-demo-nautilus-7wrnw update-demo-nautilus-gxhgs " May 12 11:10:14.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wrnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:14.624: INFO: stderr: "" May 12 11:10:14.624: INFO: stdout: "true" May 12 11:10:14.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wrnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:14.733: INFO: stderr: "" May 12 11:10:14.734: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:10:14.734: INFO: validating pod update-demo-nautilus-7wrnw May 12 11:10:14.737: INFO: got data: { "image": "nautilus.jpg" } May 12 11:10:14.737: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 11:10:14.737: INFO: update-demo-nautilus-7wrnw is verified up and running May 12 11:10:14.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gxhgs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:14.823: INFO: stderr: "" May 12 11:10:14.823: INFO: stdout: "true" May 12 11:10:14.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gxhgs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:14.925: INFO: stderr: "" May 12 11:10:14.925: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 11:10:14.925: INFO: validating pod update-demo-nautilus-gxhgs May 12 11:10:14.928: INFO: got data: { "image": "nautilus.jpg" } May 12 11:10:14.928: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 11:10:14.928: INFO: update-demo-nautilus-gxhgs is verified up and running STEP: rolling-update to new replication controller May 12 11:10:14.930: INFO: scanned /root for discovery docs: May 12 11:10:14.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1063' May 12 11:10:43.362: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 11:10:43.362: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 11:10:43.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1063' May 12 11:10:43.453: INFO: stderr: "" May 12 11:10:43.453: INFO: stdout: "update-demo-kitten-46ng2 update-demo-kitten-jwglx " May 12 11:10:43.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-46ng2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:43.546: INFO: stderr: "" May 12 11:10:43.546: INFO: stdout: "true" May 12 11:10:43.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-46ng2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:43.639: INFO: stderr: "" May 12 11:10:43.639: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 11:10:43.639: INFO: validating pod update-demo-kitten-46ng2 May 12 11:10:43.643: INFO: got data: { "image": "kitten.jpg" } May 12 11:10:43.643: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 11:10:43.643: INFO: update-demo-kitten-46ng2 is verified up and running May 12 11:10:43.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwglx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:43.727: INFO: stderr: "" May 12 11:10:43.727: INFO: stdout: "true" May 12 11:10:43.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwglx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1063' May 12 11:10:43.806: INFO: stderr: "" May 12 11:10:43.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 11:10:43.806: INFO: validating pod update-demo-kitten-jwglx May 12 11:10:43.810: INFO: got data: { "image": "kitten.jpg" } May 12 11:10:43.810: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 11:10:43.810: INFO: update-demo-kitten-jwglx is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:10:43.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1063" for this suite. May 12 11:11:07.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:07.901: INFO: namespace kubectl-1063 deletion completed in 24.088707538s • [SLOW TEST:59.551 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:11:07.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-81f11273-218f-4d62-bc57-7f90f5cf9528 STEP: Creating a pod to test consume configMaps May 12 11:11:08.004: INFO: Waiting up to 5m0s for pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0" in namespace "configmap-2862" to be "success or failure" May 12 11:11:08.009: INFO: Pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305812ms May 12 11:11:10.012: INFO: Pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007871924s May 12 11:11:12.016: INFO: Pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012200688s May 12 11:11:14.020: INFO: Pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015955244s STEP: Saw pod success May 12 11:11:14.020: INFO: Pod "pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0" satisfied condition "success or failure" May 12 11:11:14.023: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0 container configmap-volume-test: STEP: delete the pod May 12 11:11:14.238: INFO: Waiting for pod pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0 to disappear May 12 11:11:14.248: INFO: Pod pod-configmaps-803b402e-4f6d-47ad-b955-ad01b779dde0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:11:14.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2862" for this suite. May 12 11:11:20.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:20.346: INFO: namespace configmap-2862 deletion completed in 6.09396776s • [SLOW TEST:12.444 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:11:20.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:11:20.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 12 11:11:20.605: INFO: stderr: "" May 12 11:11:20.605: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:11:20.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9606" for this suite. 
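The Kubectl version test above shells out to kubectl version and asserts that both the client and server stanzas are printed. The server half of that information is also available through the discovery API; a sketch with client-go (the version skew between this sketch and the v1.15-era suite is an assumption to adapt):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        // Mirrors the "Server Version: version.Info{...}" stanza in the log above.
        fmt.Printf("Server Version: %s (git %s, built %s, %s)\n",
            info.GitVersion, info.GitCommit, info.BuildDate, info.Platform)
    }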
May 12 11:11:26.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:26.683: INFO: namespace kubectl-9606 deletion completed in 6.075083144s • [SLOW TEST:6.336 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:11:26.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 11:11:31.285: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b6eef452-e30a-4ec9-9a0e-e8f65dee0945" May 12 11:11:31.285: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b6eef452-e30a-4ec9-9a0e-e8f65dee0945" in namespace "pods-2727" to be "terminated due to deadline exceeded" May 12 11:11:31.300: INFO: Pod "pod-update-activedeadlineseconds-b6eef452-e30a-4ec9-9a0e-e8f65dee0945": Phase="Running", Reason="", readiness=true. Elapsed: 14.839329ms May 12 11:11:33.306: INFO: Pod "pod-update-activedeadlineseconds-b6eef452-e30a-4ec9-9a0e-e8f65dee0945": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020899048s May 12 11:11:33.306: INFO: Pod "pod-update-activedeadlineseconds-b6eef452-e30a-4ec9-9a0e-e8f65dee0945" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:11:33.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2727" for this suite. 
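The Pods test above relies on activeDeadlineSeconds being one of the few pod-spec fields that may be changed on a running pod; once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded, exactly the Phase="Failed" transition the log shows. A sketch of that update as a strategic-merge patch via client-go (recent signatures assumed; the pod name here is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Give the running pod roughly two seconds to live.
        patch := []byte(`{"spec":{"activeDeadlineSeconds":2}}`)
        pod, err := cs.CoreV1().Pods("pods-2727").Patch(context.TODO(),
            "pod-update-activedeadlineseconds-example", // illustrative name
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("patched:", pod.Name)
    }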
May 12 11:11:39.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:39.408: INFO: namespace pods-2727 deletion completed in 6.098421779s • [SLOW TEST:12.724 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:11:39.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 11:11:39.500: INFO: Waiting up to 5m0s for pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf" in namespace "downward-api-7808" to be "success or failure" May 12 11:11:39.510: INFO: Pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09635ms May 12 11:11:41.514: INFO: Pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014319468s May 12 11:11:43.519: INFO: Pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019077871s May 12 11:11:45.525: INFO: Pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02498862s STEP: Saw pod success May 12 11:11:45.525: INFO: Pod "downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf" satisfied condition "success or failure" May 12 11:11:45.528: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf container dapi-container: STEP: delete the pod May 12 11:11:45.599: INFO: Waiting for pod downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf to disappear May 12 11:11:45.633: INFO: Pod downward-api-3b6ec621-5029-4f0f-882f-96bd7ba92caf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:11:45.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7808" for this suite. 
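The Downward API test above injects pod metadata into environment variables via fieldRef. A sketch of the container stanza such a test builds, again with the Go API types (variable names and image are illustrative):

    package main

    import (
        "encoding/json"
        "os"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "dapi-container",
            Image: "docker.io/library/busybox:1.29", // illustrative
            Env: []corev1.EnvVar{
                {
                    Name: "POD_UID", // visible as $POD_UID inside the container
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                    },
                },
                {
                    Name: "POD_NAME",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    },
                },
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(c)
    }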
May 12 11:11:51.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:11:51.723: INFO: namespace downward-api-7808 deletion completed in 6.084295568s • [SLOW TEST:12.315 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:11:51.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-d3aa073b-4e09-43fa-a5e5-a869a5c35877 in namespace container-probe-7250 May 12 11:11:55.804: INFO: Started pod busybox-d3aa073b-4e09-43fa-a5e5-a869a5c35877 in namespace container-probe-7250 STEP: checking the pod's current state and verifying that restartCount is present May 12 11:11:55.807: INFO: Initial restart count of pod busybox-d3aa073b-4e09-43fa-a5e5-a869a5c35877 is 0 May 12 11:12:46.033: INFO: Restart count of pod container-probe-7250/busybox-d3aa073b-4e09-43fa-a5e5-a869a5c35877 is now 1 (50.225364145s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:12:46.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7250" for this suite. 
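The probe test above watches restartCount climb after the exec liveness probe starts failing. The shape of that probe, sketched with the Go API types (the embedded field is named ProbeHandler in current k8s.io/api releases and Handler in the v1.15-era API this log comes from; the touch-then-remove container command is illustrative):

    package main

    import (
        "encoding/json"
        "os"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "busybox",
            Image: "docker.io/library/busybox:1.29",
            // Healthy for ~30s, then the probe's `cat /tmp/health` starts failing
            // and the kubelet restarts the container.
            Command: []string{"/bin/sh", "-c",
                "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
            LivenessProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{
                    Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                },
                InitialDelaySeconds: 15,
                PeriodSeconds:       5,
                FailureThreshold:    1,
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(c)
    }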
May 12 11:12:52.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:12:53.098: INFO: namespace container-probe-7250 deletion completed in 6.99215376s • [SLOW TEST:61.375 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:12:53.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 12 11:12:53.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 12 11:12:53.413: INFO: stderr: "" May 12 11:12:53.413: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:12:53.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8973" for this suite. 
May 12 11:13:01.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:13:01.738: INFO: namespace kubectl-8973 deletion completed in 8.132436045s • [SLOW TEST:8.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:13:01.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 11:13:02.641: INFO: Waiting up to 5m0s for pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527" in namespace "emptydir-7798" to be "success or failure" May 12 11:13:02.807: INFO: Pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527": Phase="Pending", Reason="", readiness=false. Elapsed: 165.981748ms May 12 11:13:04.850: INFO: Pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208475777s May 12 11:13:06.854: INFO: Pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212752952s May 12 11:13:08.858: INFO: Pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21681787s STEP: Saw pod success May 12 11:13:08.858: INFO: Pod "pod-812633ad-5f06-43bd-aeb1-4ed2a612c527" satisfied condition "success or failure" May 12 11:13:08.860: INFO: Trying to get logs from node iruya-worker2 pod pod-812633ad-5f06-43bd-aeb1-4ed2a612c527 container test-container: STEP: delete the pod May 12 11:13:08.898: INFO: Waiting for pod pod-812633ad-5f06-43bd-aeb1-4ed2a612c527 to disappear May 12 11:13:08.909: INFO: Pod pod-812633ad-5f06-43bd-aeb1-4ed2a612c527 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:13:08.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7798" for this suite. 
May 12 11:13:14.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:13:14.981: INFO: namespace emptydir-7798 deletion completed in 6.068699394s • [SLOW TEST:13.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:13:14.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9776 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 11:13:15.079: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 11:13:47.453: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.188:8080/dial?request=hostName&protocol=http&host=10.244.1.184&port=8080&tries=1'] Namespace:pod-network-test-9776 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:13:47.453: INFO: >>> kubeConfig: /root/.kube/config I0512 11:13:47.483748 6 log.go:172] (0xc001c78790) (0xc0023feaa0) Create stream I0512 11:13:47.483783 6 log.go:172] (0xc001c78790) (0xc0023feaa0) Stream added, broadcasting: 1 I0512 11:13:47.485807 6 log.go:172] (0xc001c78790) Reply frame received for 1 I0512 11:13:47.485846 6 log.go:172] (0xc001c78790) (0xc0023feb40) Create stream I0512 11:13:47.485857 6 log.go:172] (0xc001c78790) (0xc0023feb40) Stream added, broadcasting: 3 I0512 11:13:47.486742 6 log.go:172] (0xc001c78790) Reply frame received for 3 I0512 11:13:47.486775 6 log.go:172] (0xc001c78790) (0xc001b6f540) Create stream I0512 11:13:47.486799 6 log.go:172] (0xc001c78790) (0xc001b6f540) Stream added, broadcasting: 5 I0512 11:13:47.487578 6 log.go:172] (0xc001c78790) Reply frame received for 5 I0512 11:13:47.546147 6 log.go:172] (0xc001c78790) Data frame received for 3 I0512 11:13:47.546182 6 log.go:172] (0xc0023feb40) (3) Data frame handling I0512 11:13:47.546196 6 log.go:172] (0xc0023feb40) (3) Data frame sent I0512 11:13:47.546719 6 log.go:172] (0xc001c78790) Data frame received for 5 I0512 11:13:47.546740 6 log.go:172] (0xc001b6f540) (5) Data frame handling I0512 11:13:47.546778 6 log.go:172] (0xc001c78790) Data frame received for 3 I0512 11:13:47.546804 6 log.go:172] (0xc0023feb40) (3) Data frame handling I0512 11:13:47.548054 6 log.go:172] (0xc001c78790) Data frame received for 1 I0512 11:13:47.548089 6 log.go:172] (0xc0023feaa0) (1) Data frame handling I0512 
11:13:47.548106 6 log.go:172] (0xc0023feaa0) (1) Data frame sent I0512 11:13:47.548117 6 log.go:172] (0xc001c78790) (0xc0023feaa0) Stream removed, broadcasting: 1 I0512 11:13:47.548139 6 log.go:172] (0xc001c78790) Go away received I0512 11:13:47.548181 6 log.go:172] (0xc001c78790) (0xc0023feaa0) Stream removed, broadcasting: 1 I0512 11:13:47.548195 6 log.go:172] (0xc001c78790) (0xc0023feb40) Stream removed, broadcasting: 3 I0512 11:13:47.548204 6 log.go:172] (0xc001c78790) (0xc001b6f540) Stream removed, broadcasting: 5 May 12 11:13:47.548: INFO: Waiting for endpoints: map[] May 12 11:13:47.551: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.188:8080/dial?request=hostName&protocol=http&host=10.244.2.187&port=8080&tries=1'] Namespace:pod-network-test-9776 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:13:47.551: INFO: >>> kubeConfig: /root/.kube/config I0512 11:13:47.579447 6 log.go:172] (0xc001be8790) (0xc003338d20) Create stream I0512 11:13:47.579472 6 log.go:172] (0xc001be8790) (0xc003338d20) Stream added, broadcasting: 1 I0512 11:13:47.581322 6 log.go:172] (0xc001be8790) Reply frame received for 1 I0512 11:13:47.581363 6 log.go:172] (0xc001be8790) (0xc0029054a0) Create stream I0512 11:13:47.581379 6 log.go:172] (0xc001be8790) (0xc0029054a0) Stream added, broadcasting: 3 I0512 11:13:47.582373 6 log.go:172] (0xc001be8790) Reply frame received for 3 I0512 11:13:47.582395 6 log.go:172] (0xc001be8790) (0xc003338dc0) Create stream I0512 11:13:47.582405 6 log.go:172] (0xc001be8790) (0xc003338dc0) Stream added, broadcasting: 5 I0512 11:13:47.583292 6 log.go:172] (0xc001be8790) Reply frame received for 5 I0512 11:13:47.643580 6 log.go:172] (0xc001be8790) Data frame received for 3 I0512 11:13:47.643614 6 log.go:172] (0xc0029054a0) (3) Data frame handling I0512 11:13:47.643631 6 log.go:172] (0xc0029054a0) (3) Data frame sent I0512 11:13:47.644266 6 log.go:172] (0xc001be8790) Data frame received for 3 I0512 11:13:47.644285 6 log.go:172] (0xc0029054a0) (3) Data frame handling I0512 11:13:47.644317 6 log.go:172] (0xc001be8790) Data frame received for 5 I0512 11:13:47.644343 6 log.go:172] (0xc003338dc0) (5) Data frame handling I0512 11:13:47.645768 6 log.go:172] (0xc001be8790) Data frame received for 1 I0512 11:13:47.645779 6 log.go:172] (0xc003338d20) (1) Data frame handling I0512 11:13:47.645785 6 log.go:172] (0xc003338d20) (1) Data frame sent I0512 11:13:47.645915 6 log.go:172] (0xc001be8790) (0xc003338d20) Stream removed, broadcasting: 1 I0512 11:13:47.645990 6 log.go:172] (0xc001be8790) Go away received I0512 11:13:47.646024 6 log.go:172] (0xc001be8790) (0xc003338d20) Stream removed, broadcasting: 1 I0512 11:13:47.646045 6 log.go:172] (0xc001be8790) (0xc0029054a0) Stream removed, broadcasting: 3 I0512 11:13:47.646070 6 log.go:172] (0xc001be8790) (0xc003338dc0) Stream removed, broadcasting: 5 May 12 11:13:47.646: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:13:47.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9776" for this suite. 
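The klog stream-frame chatter above (Create stream / Data frame received / Stream removed) is the SPDY exec transport underneath the framework's ExecWithOptions: the test curls each endpoint pod's /dial handler from inside a helper pod. A sketch of that exec call with client-go's remotecommand package (recent signatures assumed; the namespace, pod, container, and URL are taken from the log):

    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // POST .../pods/host-test-container-pod/exec, as in the ExecWithOptions lines above.
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("pod-network-test-9776").
            Name("host-test-container-pod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "hostexec",
                Command: []string{"/bin/sh", "-c",
                    "curl -g -q -s 'http://10.244.2.188:8080/dial?request=hostName&protocol=http&host=10.244.1.184&port=8080&tries=1'"},
                Stdout: true,
                Stderr: true,
            }, scheme.ParameterCodec)
        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Println(stdout.String()) // the dial handler replies with the reached hostnames
    }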
May 12 11:14:13.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:14:13.811: INFO: namespace pod-network-test-9776 deletion completed in 26.161299398s • [SLOW TEST:58.830 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:14:13.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:14:13.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663" in namespace "downward-api-3013" to be "success or failure" May 12 11:14:13.884: INFO: Pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466563ms May 12 11:14:15.889: INFO: Pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00918534s May 12 11:14:18.025: INFO: Pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663": Phase="Running", Reason="", readiness=true. Elapsed: 4.144626227s May 12 11:14:20.028: INFO: Pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148399669s STEP: Saw pod success May 12 11:14:20.028: INFO: Pod "downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663" satisfied condition "success or failure" May 12 11:14:20.031: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663 container client-container: STEP: delete the pod May 12 11:14:20.066: INFO: Waiting for pod downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663 to disappear May 12 11:14:20.090: INFO: Pod downwardapi-volume-9240d2d9-2732-48f7-b630-e2c538906663 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:14:20.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3013" for this suite. 
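The Downward API volume test above surfaces the container's CPU limit as a file through a resourceFieldRef. The divisor sets the unit the value is reported in; the "1m" divisor and names below are illustrative, not read from the log:

    package main

    import (
        "encoding/json"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "cpu_limit",
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.cpu",
                            // With a 1m divisor, a 250m CPU limit is written as "250".
                            Divisor: resource.MustParse("1m"),
                        },
                    }},
                },
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(vol)
    }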
May 12 11:14:26.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:14:26.235: INFO: namespace downward-api-3013 deletion completed in 6.140696328s • [SLOW TEST:12.424 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:14:26.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 12 11:14:26.288: INFO: PodSpec: initContainers in spec.initContainers May 12 11:15:20.004: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b8b98aeb-5ba2-4675-b1e5-7a34c5866f8b", GenerateName:"", Namespace:"init-container-1354", SelfLink:"/api/v1/namespaces/init-container-1354/pods/pod-init-b8b98aeb-5ba2-4675-b1e5-7a34c5866f8b", UID:"61f3ef9a-bcdf-4fe0-bcd2-c06f29ce2692", ResourceVersion:"10464333", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724878866, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"288802053"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4l6v8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0018d4480), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6v8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6v8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4l6v8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026604b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f700c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002660550)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002660570)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002660578), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00266057c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724878866, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724878866, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724878866, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724878866, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.190", StartTime:(*v1.Time)(0xc002296460), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0022964a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00257c770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://535e011e81b390ea5fd9db3738d7a941142c86851f4a1fc1e851107a71f78d1f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022964c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002296480), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:15:20.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1354" for this suite. May 12 11:15:46.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:15:46.639: INFO: namespace init-container-1354 deletion completed in 26.601890176s • [SLOW TEST:80.404 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:15:46.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:16:47.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2199" for this suite. 
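
The probe test above creates a pod whose readiness probe always fails and then watches it: the pod must never report Ready, and its restart count must stay at 0, since readiness failures only keep a pod out of service endpoints and, unlike liveness failures, never restart the container. A minimal sketch of such a pod (the name, image, and probe timings here are illustrative, not the suite's exact manifest):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails   # illustrative name
spec:
  containers:
  - name: probe-demo
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always fails, so the Ready condition stays False
      initialDelaySeconds: 5
      periodSeconds: 5

With a manifest like this, kubectl get pod shows READY 0/1 with RESTARTS 0 for as long as the pod runs.
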
May 12 11:17:12.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:17:12.403: INFO: namespace container-probe-2199 deletion completed in 25.191496121s • [SLOW TEST:85.764 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:17:12.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 12 11:17:13.050: INFO: Pod name pod-release: Found 0 pods out of 1 May 12 11:17:18.212: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:17:19.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8939" for this suite. 
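
The ReplicationController test above works by relabeling: an RC counts only pods matching its selector, so when the test changes the matched label on the pod it created, the controller releases that pod and spins up a replacement, while the released pod keeps running unmanaged. A sketch of such an RC, reusing the pod-release name from the log (the selector labels and image are assumptions, since the log does not show the manifest):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release            # pods count toward replicas only while this label matches
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.1   # illustrative image

Relabeling one of its pods, for example with kubectl label pod <pod> name=released --overwrite, reproduces the release step.
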
May 12 11:17:27.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:17:27.364: INFO: namespace replication-controller-8939 deletion completed in 8.101989448s • [SLOW TEST:14.961 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:17:27.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-3550 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3550 to expose endpoints map[] May 12 11:17:27.817: INFO: Get endpoints failed (101.292507ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 12 11:17:28.820: INFO: successfully validated that service endpoint-test2 in namespace services-3550 exposes endpoints map[] (1.104196089s elapsed) STEP: Creating pod pod1 in namespace services-3550 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3550 to expose endpoints map[pod1:[80]] May 12 11:17:33.112: INFO: successfully validated that service endpoint-test2 in namespace services-3550 exposes endpoints map[pod1:[80]] (4.28614747s elapsed) STEP: Creating pod pod2 in namespace services-3550 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3550 to expose endpoints map[pod1:[80] pod2:[80]] May 12 11:17:37.535: INFO: successfully validated that service endpoint-test2 in namespace services-3550 exposes endpoints map[pod1:[80] pod2:[80]] (4.420001769s elapsed) STEP: Deleting pod pod1 in namespace services-3550 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3550 to expose endpoints map[pod2:[80]] May 12 11:17:37.574: INFO: successfully validated that service endpoint-test2 in namespace services-3550 exposes endpoints map[pod2:[80]] (33.739657ms elapsed) STEP: Deleting pod pod2 in namespace services-3550 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3550 to expose endpoints map[] May 12 11:17:37.601: INFO: successfully validated that service endpoint-test2 in namespace services-3550 exposes endpoints map[] (22.297198ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:17:38.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3550" for this suite. 
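
The Services test above shows the endpoints controller at work: the expected endpoints map goes from map[] to map[pod1:[80]], then map[pod1:[80] pod2:[80]], and back down as pods matching the service selector are created and deleted. The endpoint-test2 service itself would look roughly like this (the selector labels are an assumption; the log only shows the service name and port):

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2         # assumed label, shared by pod1 and pod2
  ports:
  - port: 80
    protocol: TCP

Each ready pod carrying the selector label contributes its IP and port 80 to the Endpoints object of the same name, which is exactly what the test polls for.
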
May 12 11:18:02.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:18:02.583: INFO: namespace services-3550 deletion completed in 24.117788811s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:35.218 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:18:02.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 11:18:16.992: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:18:17.012: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:18:19.012: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:18:19.015: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:18:21.012: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:18:21.015: INFO: Pod pod-with-prestop-http-hook still exists May 12 11:18:23.012: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 11:18:23.016: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:18:23.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1267" for this suite. 
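
The lifecycle-hook test above first starts a handler pod (the "container to handle the HTTPGet hook request"), then creates pod-with-prestop-http-hook and deletes it; the kubelet fires the preStop httpGet against the handler before stopping the container, and the test verifies the handler received it. A sketch of the hook wiring, with the handler address, port, and path as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name from the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative image
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.2.5           # illustrative: the handler pod's IP
          port: 8080
          path: /echo?msg=prestop    # illustrative path checked by the handler
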
May 12 11:18:45.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:18:45.518: INFO: namespace container-lifecycle-hook-1267 deletion completed in 22.491930631s • [SLOW TEST:42.935 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:18:45.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-5p8l STEP: Creating a pod to test atomic-volume-subpath May 12 11:18:45.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5p8l" in namespace "subpath-1629" to be "success or failure" May 12 11:18:45.692: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Pending", Reason="", readiness=false. Elapsed: 28.223549ms May 12 11:18:47.907: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243451263s May 12 11:18:49.911: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 4.246626665s May 12 11:18:51.914: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 6.250298272s May 12 11:18:53.998: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 8.334055565s May 12 11:18:56.001: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 10.337499343s May 12 11:18:58.006: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 12.341695398s May 12 11:19:00.010: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 14.345693173s May 12 11:19:02.111: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 16.447115011s May 12 11:19:04.114: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 18.450302566s May 12 11:19:06.264: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 20.600209645s May 12 11:19:08.400: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.735672325s May 12 11:19:10.435: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Running", Reason="", readiness=true. Elapsed: 24.771250607s May 12 11:19:12.439: INFO: Pod "pod-subpath-test-downwardapi-5p8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.77524429s STEP: Saw pod success May 12 11:19:12.439: INFO: Pod "pod-subpath-test-downwardapi-5p8l" satisfied condition "success or failure" May 12 11:19:12.442: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-5p8l container test-container-subpath-downwardapi-5p8l: STEP: delete the pod May 12 11:19:13.299: INFO: Waiting for pod pod-subpath-test-downwardapi-5p8l to disappear May 12 11:19:13.531: INFO: Pod pod-subpath-test-downwardapi-5p8l no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5p8l May 12 11:19:13.531: INFO: Deleting pod "pod-subpath-test-downwardapi-5p8l" in namespace "subpath-1629" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:19:13.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1629" for this suite. May 12 11:19:23.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:19:23.622: INFO: namespace subpath-1629 deletion completed in 10.086592591s • [SLOW TEST:38.104 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:19:23.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:19:23.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7" in namespace "projected-1775" to be "success or failure" May 12 11:19:23.825: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.954819ms May 12 11:19:25.828: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054286476s May 12 11:19:27.832: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.05830203s May 12 11:19:30.350: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576543207s May 12 11:19:32.354: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579841529s May 12 11:19:34.357: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583535358s May 12 11:19:36.436: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.662175098s May 12 11:19:38.670: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.896476689s STEP: Saw pod success May 12 11:19:38.670: INFO: Pod "downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7" satisfied condition "success or failure" May 12 11:19:38.754: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7 container client-container: STEP: delete the pod May 12 11:19:40.426: INFO: Waiting for pod downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7 to disappear May 12 11:19:41.127: INFO: Pod downwardapi-volume-9056c966-cc7b-4d5c-b94a-7fb4dc57a9e7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:19:41.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1775" for this suite. May 12 11:19:50.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:19:50.788: INFO: namespace projected-1775 deletion completed in 8.769764547s • [SLOW TEST:27.166 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:19:50.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 12 11:19:51.013: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 12 11:19:51.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:07.869: INFO: 
stderr: "" May 12 11:20:07.869: INFO: stdout: "service/redis-slave created\n" May 12 11:20:07.869: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 12 11:20:07.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:08.332: INFO: stderr: "" May 12 11:20:08.333: INFO: stdout: "service/redis-master created\n" May 12 11:20:08.333: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 12 11:20:08.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:08.727: INFO: stderr: "" May 12 11:20:08.727: INFO: stdout: "service/frontend created\n" May 12 11:20:08.728: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 12 11:20:08.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:09.096: INFO: stderr: "" May 12 11:20:09.096: INFO: stdout: "deployment.apps/frontend created\n" May 12 11:20:09.096: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 11:20:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:09.598: INFO: stderr: "" May 12 11:20:09.598: INFO: stdout: "deployment.apps/redis-master created\n" May 12 11:20:09.598: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 12 11:20:09.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7771' May 12 11:20:10.157: INFO: stderr: "" May 12 11:20:10.157: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 12 
11:20:10.157: INFO: Waiting for all frontend pods to be Running. May 12 11:20:25.208: INFO: Waiting for frontend to serve content. May 12 11:20:25.261: INFO: Trying to add a new entry to the guestbook. May 12 11:20:26.320: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 12 11:20:26.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:26.625: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:26.625: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 12 11:20:26.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:27.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:27.106: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 11:20:27.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:27.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:27.238: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 11:20:27.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:27.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:27.341: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 11:20:27.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:27.590: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:27.590: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 11:20:27.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7771' May 12 11:20:27.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 11:20:27.939: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:20:27.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7771" for this suite. 
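
The guestbook manifests logged above pin service discovery to DNS via GET_HOSTS_FROM=dns; their own comments describe the fallback for clusters without a DNS service, where the frontend and slaves instead read kubelet-injected environment variables such as REDIS_MASTER_SERVICE_HOST (injected only for services that already exist when the pod starts). The variant replaces the env stanza shown in those manifests:

env:
- name: GET_HOSTS_FROM
  value: env   # fall back to *_SERVICE_HOST variables instead of DNS

The force-deletion warnings at the end are expected: kubectl delete --grace-period=0 --force removes the API objects without waiting for the kubelet to confirm the containers have actually terminated.
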
May 12 11:21:11.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:21:11.142: INFO: namespace kubectl-7771 deletion completed in 42.803590432s • [SLOW TEST:80.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:21:11.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:21:35.412: INFO: Container started at 2020-05-12 11:21:16 +0000 UTC, pod became ready at 2020-05-12 11:21:35 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:21:35.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6271" for this suite. 
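
In the probe test above the container started at 11:21:16 but the pod only became Ready at 11:21:35, which is the point of the assertion: readiness is withheld until the probe's initial delay has elapsed and the probe has succeeded, and the container is never restarted along the way. A sketch with an illustrative delay of the same order as the roughly 19-second gap in the log:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-initial-delay  # illustrative name
spec:
  containers:
  - name: probe-demo
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]   # succeeds, but is not run before the delay
      initialDelaySeconds: 20    # assumed value; the suite's exact setting is not in the log
      periodSeconds: 5
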
May 12 11:21:59.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:21:59.522: INFO: namespace container-probe-6271 deletion completed in 24.107241088s • [SLOW TEST:48.381 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:21:59.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1335 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1335 STEP: Deleting pre-stop pod May 12 11:22:19.912: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:22:19.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1335" for this suite. 
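
The PreStop test above runs a server pod and a tester pod; deleting the tester fires its preStop hook, which calls back into the server, and the JSON state dumped above ("Received": {"prestop": 1}) proves the hook ran before the pod died. One rough shape for the tester side, assuming a wget-based exec hook and an illustrative server URL (the log does not show the real hook):

apiVersion: v1
kind: Pod
metadata:
  name: tester                   # name from the log
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://10.244.1.5:8080/write?prestop=1"]  # assumed URL pointing at the server pod
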
May 12 11:23:02.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:23:02.442: INFO: namespace prestop-1335 deletion completed in 42.326956265s • [SLOW TEST:62.919 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:23:02.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:23:02.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f" in namespace "downward-api-1357" to be "success or failure" May 12 11:23:02.685: INFO: Pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.216396ms May 12 11:23:04.688: INFO: Pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049637543s May 12 11:23:06.692: INFO: Pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053009589s May 12 11:23:08.695: INFO: Pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056287143s STEP: Saw pod success May 12 11:23:08.695: INFO: Pod "downwardapi-volume-77daac48-832b-4402-9455-954d708f128f" satisfied condition "success or failure" May 12 11:23:08.698: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-77daac48-832b-4402-9455-954d708f128f container client-container: STEP: delete the pod May 12 11:23:08.749: INFO: Waiting for pod downwardapi-volume-77daac48-832b-4402-9455-954d708f128f to disappear May 12 11:23:08.786: INFO: Pod downwardapi-volume-77daac48-832b-4402-9455-954d708f128f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:23:08.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1357" for this suite. 
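
The Downward API volume test above mounts the container's own memory request into a file and has the container print it, which is what "Trying to get logs ... container client-container" is checking. The mechanism is a downwardAPI volume with a resourceFieldRef; a self-contained sketch (pod name, request size, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name from the log
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory

With the default divisor of 1, the file contains the request in bytes: 33554432 for 32Mi.
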
May 12 11:23:16.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:23:16.877: INFO: namespace downward-api-1357 deletion completed in 8.063747981s • [SLOW TEST:14.434 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:23:16.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 12 11:23:23.044: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 12 11:23:33.221: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:23:33.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5801" for this suite. 
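
The Delete Grace Period test above deletes the pod gracefully and then polls through kubectl proxy until the kubelet has observed the termination notice ("no pod exists with the name we were looking for"). The window the kubelet honors between SIGTERM and SIGKILL is the pod's terminationGracePeriodSeconds; a sketch:

apiVersion: v1
kind: Pod
metadata:
  name: grace-demo                   # illustrative name
spec:
  terminationGracePeriodSeconds: 30  # the default; raise it for slow shutdowns
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1      # illustrative image

A per-deletion override is also possible, e.g. kubectl delete pod grace-demo --grace-period=5.
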
May 12 11:23:39.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:23:39.533: INFO: namespace pods-5801 deletion completed in 6.30427243s • [SLOW TEST:22.655 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:23:39.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 12 11:23:39.933: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 11:23:39.941: INFO: Waiting for terminating namespaces to be deleted... May 12 11:23:39.944: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 12 11:23:39.948: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 11:23:39.948: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:23:39.948: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 12 11:23:39.948: INFO: Container kindnet-cni ready: true, restart count 0 May 12 11:23:39.948: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 12 11:23:39.953: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 12 11:23:39.953: INFO: Container coredns ready: true, restart count 0 May 12 11:23:39.953: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 12 11:23:39.953: INFO: Container coredns ready: true, restart count 0 May 12 11:23:39.953: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 12 11:23:39.953: INFO: Container kube-proxy ready: true, restart count 0 May 12 11:23:39.953: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 12 11:23:39.953: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 12 11:23:40.174: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 12 11:23:40.174: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting 
resource cpu=100m on Node iruya-worker2 May 12 11:23:40.174: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 12 11:23:40.174: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 12 11:23:40.174: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 12 11:23:40.174: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9.160e441f6ed9f281], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8130/filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9.160e441fe3dfdadb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9.160e442119f5c2da], Reason = [Created], Message = [Created container filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9.160e44213e2d7337], Reason = [Started], Message = [Started container filler-pod-f0c7a96b-8de8-4ec1-8e11-95e62a3d16b9] STEP: Considering event: Type = [Normal], Name = [filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa.160e441f6fe69925], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8130/filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa.160e44203edc2459], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa.160e44213e2e0d66], Reason = [Created], Message = [Created container filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa] STEP: Considering event: Type = [Normal], Name = [filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa.160e442153004d8e], Reason = [Started], Message = [Started container filler-pod-f46d10c3-82e4-4872-9396-91e57eb873aa] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e4421d12c7319], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:23:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8130" for this suite. 
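
The scheduler test above tallies the CPU already requested on each node, fills the remainder with filler pods, and then submits one more pod whose request cannot fit anywhere, expecting exactly the FailedScheduling event recorded in the log (0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu). The over-requesting pod looks roughly like this, with the CPU figure as an assumption; anything above every node's remaining allocatable CPU behaves the same:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod          # name from the events above
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1 # image from the events above
    resources:
      requests:
        cpu: "600m"             # assumed; must exceed each node's free allocatable CPU
      limits:
        cpu: "600m"
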
May 12 11:24:02.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:02.652: INFO: namespace sched-pred-8130 deletion completed in 10.205088331s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:23.119 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:24:02.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-b67c7663-cd6b-43de-8d5a-c833935afae7 STEP: Creating a pod to test consume configMaps May 12 11:24:04.095: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862" in namespace "projected-4278" to be "success or failure" May 12 11:24:04.161: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862": Phase="Pending", Reason="", readiness=false. Elapsed: 66.554362ms May 12 11:24:06.283: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188658754s May 12 11:24:08.410: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314882844s May 12 11:24:10.413: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318223508s May 12 11:24:12.418: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.323599087s STEP: Saw pod success May 12 11:24:12.418: INFO: Pod "pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862" satisfied condition "success or failure" May 12 11:24:12.420: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862 container projected-configmap-volume-test: STEP: delete the pod May 12 11:24:13.037: INFO: Waiting for pod pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862 to disappear May 12 11:24:13.205: INFO: Pod pod-projected-configmaps-e9d0a2e0-2179-4b7e-8971-205323a73862 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:24:13.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4278" for this suite. May 12 11:24:19.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:19.434: INFO: namespace projected-4278 deletion completed in 6.226248997s • [SLOW TEST:16.782 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:24:19.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 11:24:19.785: INFO: Waiting up to 5m0s for pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db" in namespace "emptydir-4265" to be "success or failure" May 12 11:24:19.830: INFO: Pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db": Phase="Pending", Reason="", readiness=false. Elapsed: 44.403299ms May 12 11:24:21.832: INFO: Pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046768526s May 12 11:24:23.990: INFO: Pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db": Phase="Running", Reason="", readiness=true. Elapsed: 4.20504637s May 12 11:24:25.995: INFO: Pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.209131107s STEP: Saw pod success May 12 11:24:25.995: INFO: Pod "pod-f929317b-c5aa-42b5-914e-3097fcee14db" satisfied condition "success or failure" May 12 11:24:25.997: INFO: Trying to get logs from node iruya-worker2 pod pod-f929317b-c5aa-42b5-914e-3097fcee14db container test-container: STEP: delete the pod May 12 11:24:26.195: INFO: Waiting for pod pod-f929317b-c5aa-42b5-914e-3097fcee14db to disappear May 12 11:24:26.409: INFO: Pod pod-f929317b-c5aa-42b5-914e-3097fcee14db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:24:26.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4265" for this suite. May 12 11:24:34.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:34.510: INFO: namespace emptydir-4265 deletion completed in 8.097071095s • [SLOW TEST:15.077 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:24:34.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:24:34.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96" in namespace "projected-8421" to be "success or failure" May 12 11:24:34.865: INFO: Pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96": Phase="Pending", Reason="", readiness=false. Elapsed: 181.867448ms May 12 11:24:37.039: INFO: Pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356178519s May 12 11:24:39.042: INFO: Pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96": Phase="Running", Reason="", readiness=true. Elapsed: 4.359079806s May 12 11:24:41.046: INFO: Pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.363510841s STEP: Saw pod success May 12 11:24:41.047: INFO: Pod "downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96" satisfied condition "success or failure" May 12 11:24:41.050: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96 container client-container: STEP: delete the pod May 12 11:24:41.100: INFO: Waiting for pod downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96 to disappear May 12 11:24:41.158: INFO: Pod downwardapi-volume-84478356-df80-4402-a3a0-e39c35a6ba96 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:24:41.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8421" for this suite. May 12 11:24:49.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:24:49.581: INFO: namespace projected-8421 deletion completed in 8.419070788s • [SLOW TEST:15.070 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:24:49.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-bc2b9a48-f825-4a55-ba91-3f9df7c77324 STEP: Creating a pod to test consume secrets May 12 11:24:49.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10" in namespace "projected-8334" to be "success or failure" May 12 11:24:49.872: INFO: Pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10": Phase="Pending", Reason="", readiness=false. Elapsed: 29.341644ms May 12 11:24:51.876: INFO: Pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0338262s May 12 11:24:53.966: INFO: Pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123951866s May 12 11:24:56.213: INFO: Pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.370103876s STEP: Saw pod success May 12 11:24:56.213: INFO: Pod "pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10" satisfied condition "success or failure" May 12 11:24:56.584: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10 container projected-secret-volume-test: STEP: delete the pod May 12 11:24:56.795: INFO: Waiting for pod pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10 to disappear May 12 11:24:56.997: INFO: Pod pod-projected-secrets-8243d41f-05f3-45e9-bc8c-8962e0d18d10 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:24:56.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8334" for this suite. May 12 11:25:05.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:05.236: INFO: namespace projected-8334 deletion completed in 8.235712462s • [SLOW TEST:15.655 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:25:05.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 11:25:13.411: INFO: DNS probes using dns-9265/dns-test-d76f6fc6-a83e-41ef-a112-b3b4edb16075 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:25:13.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9265" for this suite. May 12 11:25:21.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:21.918: INFO: namespace dns-9265 deletion completed in 8.385682896s • [SLOW TEST:16.681 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:25:21.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 11:25:22.585: INFO: Waiting up to 5m0s for pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d" in namespace "downward-api-6305" to be "success or failure" May 12 11:25:22.660: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 75.437154ms May 12 11:25:24.782: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197008671s May 12 11:25:26.785: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200717277s May 12 11:25:28.790: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d": Phase="Running", Reason="", readiness=true. Elapsed: 6.204946014s May 12 11:25:30.793: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.208822406s STEP: Saw pod success May 12 11:25:30.793: INFO: Pod "downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d" satisfied condition "success or failure" May 12 11:25:30.796: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d container dapi-container: STEP: delete the pod May 12 11:25:30.866: INFO: Waiting for pod downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d to disappear May 12 11:25:30.876: INFO: Pod downward-api-bc55a53a-1c5d-4d72-82e9-48084d842e0d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:25:30.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6305" for this suite. May 12 11:25:38.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:25:39.010: INFO: namespace downward-api-6305 deletion completed in 8.102523327s • [SLOW TEST:17.092 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:25:39.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:25:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9069" for this suite. 
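The [sig-node] Downward API spec above (namespace downward-api-6305) injects the container's own resource requests and limits as environment variables via resourceFieldRef. A minimal sketch of a pod exercising the same mechanism — all names and values here are illustrative, not the suite's actual manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_|MEM_'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef: {containerName: dapi-container, resource: limits.cpu}
    - name: MEM_REQUEST
      valueFrom:
        resourceFieldRef: {containerName: dapi-container, resource: requests.memory}
EOF

kubectl logs then shows the resolved values; note that limits.cpu is rounded up to whole cores unless a divisor is set on the resourceFieldRef.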
May 12 11:26:25.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:26:25.324: INFO: namespace kubelet-test-9069 deletion completed in 42.233899733s • [SLOW TEST:46.313 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:26:25.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:26:26.335: INFO: Creating ReplicaSet my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2 May 12 11:26:26.584: INFO: Pod name my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2: Found 0 pods out of 1 May 12 11:26:31.735: INFO: Pod name my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2: Found 1 pods out of 1 May 12 11:26:31.735: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2" is running May 12 11:26:33.743: INFO: Pod "my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2-lvkv8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 11:26:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 11:26:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 11:26:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 11:26:26 +0000 UTC Reason: Message:}]) May 12 11:26:33.743: INFO: Trying to dial the pod May 12 11:26:38.751: INFO: Controller my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2: Got expected result from replica 1 [my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2-lvkv8]: "my-hostname-basic-e0fef061-2ab6-4688-8e26-173fcb77bcf2-lvkv8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:26:38.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4319" for this suite. 
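The ReplicaSet spec above only checks that one replica comes up and answers with its own hostname. The same shape by hand — an apps/v1 ReplicaSet whose label selector matches its pod template — would be roughly this sketch (the image is a stand-in for the suite's serve-hostname image):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # illustrative
spec:
  replicas: 1
  selector:
    matchLabels: {name: my-hostname-basic}
  template:
    metadata:
      labels: {name: my-hostname-basic}
    spec:
      containers:
      - name: my-hostname-basic
        image: docker.io/library/nginx:1.14-alpine   # stand-in image
        ports:
        - containerPort: 80
EOF
kubectl get pods -l name=my-hostname-basic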
May 12 11:26:44.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:26:44.896: INFO: namespace replicaset-4319 deletion completed in 6.141946331s • [SLOW TEST:19.572 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:26:44.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-8a8a57be-3d12-4d0b-8e8a-30d15f76dd18 STEP: Creating a pod to test consume configMaps May 12 11:26:44.959: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2" in namespace "projected-5321" to be "success or failure" May 12 11:26:44.973: INFO: Pod "pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.151493ms May 12 11:26:46.976: INFO: Pod "pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016601352s May 12 11:26:49.034: INFO: Pod "pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074625578s STEP: Saw pod success May 12 11:26:49.034: INFO: Pod "pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2" satisfied condition "success or failure" May 12 11:26:49.036: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2 container projected-configmap-volume-test: STEP: delete the pod May 12 11:26:49.341: INFO: Waiting for pod pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2 to disappear May 12 11:26:49.519: INFO: Pod pod-projected-configmaps-49950940-4914-4d70-971a-d1cfb584f1a2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:26:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5321" for this suite. 
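The projected-configMap spec above mounts a ConfigMap through a projected volume and asserts the file permissions set by defaultMode. A runnable sketch of the same shape (names and mode illustrative; projected files are symlinks, hence ls -lL):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - {name: cfg, mountPath: /etc/projected}
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400
      sources:
      - configMap: {name: demo-cm}
EOF
kubectl logs projected-cm-demo    # expect -r-------- on data-1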
May 12 11:26:57.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:26:57.668: INFO: namespace projected-5321 deletion completed in 8.144664292s • [SLOW TEST:12.771 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:26:57.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 12 11:26:57.987: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466494,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 11:26:57.987: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466495,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 11:26:57.987: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466497,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 12 11:27:08.122: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466519,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 11:27:08.122: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466520,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 12 11:27:08.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-label-changed,UID:8ec8b58c-89c7-4444-a5f7-35630e84e048,ResourceVersion:10466521,Generation:0,CreationTimestamp:2020-05-12 11:26:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:27:08.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9982" for this suite. 
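The watch spec above pins down the selector semantics: the filtered watch delivers DELETED when the label is changed away from the selector's value and ADDED when it is restored, and the mutation counter jumping from 1 to 2 across that gap shows the intermediate modification produced no event. The same stream can be eyeballed from the CLI (selector copied from the log; kubectl prints one row per event rather than the event type):

kubectl get configmaps --watch -l watch-this-configmap=label-changed-and-restored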
May 12 11:27:14.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:27:14.729: INFO: namespace watch-9982 deletion completed in 6.446437151s • [SLOW TEST:17.060 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:27:14.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 11:27:14.789: INFO: Waiting up to 5m0s for pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6" in namespace "emptydir-3569" to be "success or failure" May 12 11:27:14.832: INFO: Pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.929248ms May 12 11:27:17.215: INFO: Pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426476776s May 12 11:27:19.244: INFO: Pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6": Phase="Running", Reason="", readiness=true. Elapsed: 4.455236449s May 12 11:27:21.248: INFO: Pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.458883249s STEP: Saw pod success May 12 11:27:21.248: INFO: Pod "pod-57849219-2546-4bd4-afb7-5622528e4fe6" satisfied condition "success or failure" May 12 11:27:21.250: INFO: Trying to get logs from node iruya-worker2 pod pod-57849219-2546-4bd4-afb7-5622528e4fe6 container test-container: STEP: delete the pod May 12 11:27:21.267: INFO: Waiting for pod pod-57849219-2546-4bd4-afb7-5622528e4fe6 to disappear May 12 11:27:21.272: INFO: Pod pod-57849219-2546-4bd4-afb7-5622528e4fe6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:27:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3569" for this suite. 
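The emptyDir spec above writes a 0644 file as root into a memory-backed (tmpfs) emptyDir and verifies both the mode and the filesystem type. A hand-rolled equivalent using busybox instead of the suite's mounttest image (illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo hi > /test/file && chmod 0644 /test/file && ls -l /test/file && mount | grep ' /test '"]
    volumeMounts:
    - {name: scratch, mountPath: /test}
  volumes:
  - name: scratch
    emptyDir: {medium: Memory}    # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo   # expect -rw-r--r-- and a 'tmpfs' mount entry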
May 12 11:27:27.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:27:27.376: INFO: namespace emptydir-3569 deletion completed in 6.101483367s • [SLOW TEST:12.647 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:27:27.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:27:27.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3161' May 12 11:27:27.510: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 11:27:27.510: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller May 12 11:27:27.543: INFO: scanned /root for discovery docs: May 12 11:27:27.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3161' May 12 11:27:46.754: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 11:27:46.754: INFO: stdout: "Created e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde\nScaling up e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 12 11:27:46.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3161' May 12 11:27:46.917: INFO: stderr: "" May 12 11:27:46.917: INFO: stdout: "e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde-4x6p4 " May 12 11:27:46.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde-4x6p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3161' May 12 11:27:47.003: INFO: stderr: "" May 12 11:27:47.003: INFO: stdout: "true" May 12 11:27:47.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde-4x6p4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3161' May 12 11:27:47.085: INFO: stderr: "" May 12 11:27:47.085: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 12 11:27:47.085: INFO: e2e-test-nginx-rc-c10a5c3bd6b3e8c0bf16a06dc84cbbde-4x6p4 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 12 11:27:47.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3161' May 12 11:27:47.177: INFO: stderr: "" May 12 11:27:47.177: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:27:47.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3161" for this suite.
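Both kubectl invocations in the spec above warn on stderr that they are deprecated: --generator=run/v1 and rolling-update were already on their way out in v1.15. The replacement workflow is Deployment-based; roughly (names illustrative):

kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx

Unlike rolling-update, which scales and renames a second replication controller client-side, a Deployment performs the rollout server-side, and setting the same image is simply a no-op.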
May 12 11:27:53.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:27:53.409: INFO: namespace kubectl-3161 deletion completed in 6.121603394s • [SLOW TEST:26.032 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:27:53.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 12 11:27:53.461: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:28:02.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1757" for this suite. 
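The init-container spec above asserts only ordering: on a restartPolicy: Never pod, every initContainer must run to completion, in sequence, before the app container starts. A minimal pod of the same shape (illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - {name: init1, image: busybox, command: ["true"]}
  - {name: init2, image: busybox, command: ["true"]}
  containers:
  - {name: run1, image: busybox, command: ["sh", "-c", "echo main ran"]}
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'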
May 12 11:28:08.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:28:08.622: INFO: namespace init-container-1757 deletion completed in 6.190861553s • [SLOW TEST:15.213 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:28:08.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0512 11:28:49.644730 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:28:49.644: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:28:49.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9716" for this suite. 
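The garbage-collector spec above deletes the replication controller with orphaning delete options and then waits 30 seconds to make sure the GC does not remove the pods. The CLI spelling of the same request in this release is --cascade=false (against the API, DeleteOptions with propagationPolicy: Orphan); the names below are illustrative:

kubectl delete rc my-rc --cascade=false
kubectl get pods -l name=my-rc    # pods survive, with their ownerReference stripped by the orphan finalizer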
May 12 11:29:05.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:29:05.778: INFO: namespace gc-9716 deletion completed in 16.129810175s • [SLOW TEST:57.155 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:29:05.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 11:29:16.000: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:16.006: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:18.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:18.010: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:20.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:20.010: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:22.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:22.009: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:24.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:24.009: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:26.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:26.545: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:28.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:28.139: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:30.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:30.010: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:32.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:32.009: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:34.007: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:34.055: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:36.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:36.010: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:38.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:38.090: 
INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:40.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:40.009: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:42.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:42.011: INFO: Pod pod-with-prestop-exec-hook still exists May 12 11:29:44.006: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 11:29:44.010: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:29:44.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5678" for this suite. May 12 11:30:06.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:30:06.115: INFO: namespace container-lifecycle-hook-5678 deletion completed in 22.094522178s • [SLOW TEST:60.336 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:30:06.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 12 11:30:06.170: INFO: Waiting up to 5m0s for pod "client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9" in namespace "containers-8157" to be "success or failure" May 12 11:30:06.228: INFO: Pod "client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9": Phase="Pending", Reason="", readiness=false. Elapsed: 57.789253ms May 12 11:30:08.643: INFO: Pod "client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47197552s May 12 11:30:10.646: INFO: Pod "client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.47570718s STEP: Saw pod success May 12 11:30:10.646: INFO: Pod "client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9" satisfied condition "success or failure" May 12 11:30:10.648: INFO: Trying to get logs from node iruya-worker2 pod client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9 container test-container: STEP: delete the pod May 12 11:30:10.674: INFO: Waiting for pod client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9 to disappear May 12 11:30:10.684: INFO: Pod client-containers-f9f70b5b-bcf7-41cc-9137-62a9fbe033a9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:30:10.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8157" for this suite. May 12 11:30:16.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:30:16.854: INFO: namespace containers-8157 deletion completed in 6.167723152s • [SLOW TEST:10.739 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:30:16.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 11:30:21.968: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:30:22.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9598" for this suite. 
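The container-runtime spec above has the container write "OK" to the default termination-message path and expects the kubelet to surface it in the container status; FallbackToLogsOnError only kicks in when the container fails with an empty message file, which is not the case here. A sketch (illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'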
May 12 11:30:28.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:30:28.130: INFO: namespace container-runtime-9598 deletion completed in 6.077033547s • [SLOW TEST:11.275 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:30:28.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 12 11:30:28.310: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:28.314: INFO: Number of nodes with available pods: 0 May 12 11:30:28.314: INFO: Node iruya-worker is running more than one daemon pod May 12 11:30:29.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:29.323: INFO: Number of nodes with available pods: 0 May 12 11:30:29.323: INFO: Node iruya-worker is running more than one daemon pod May 12 11:30:30.517: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:30.520: INFO: Number of nodes with available pods: 0 May 12 11:30:30.520: INFO: Node iruya-worker is running more than one daemon pod May 12 11:30:31.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:31.321: INFO: Number of nodes with available pods: 0 May 12 11:30:31.321: INFO: Node iruya-worker is running more than one daemon pod May 12 11:30:32.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:32.494: INFO: Number of nodes with available pods: 0 May 12 11:30:32.494: INFO: Node iruya-worker is running more than one daemon pod May 12 11:30:33.346: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:33.402: INFO: Number of nodes with available pods: 1 May 12 11:30:33.402: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:30:34.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:34.360: INFO: Number of nodes with available pods: 2 May 12 11:30:34.360: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 12 11:30:34.411: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:30:34.423: INFO: Number of nodes with available pods: 2 May 12 11:30:34.423: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
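The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above are expected, not an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet has no matching toleration, so that node is excluded from the count. A DaemonSet that should also cover tainted control-plane nodes declares the toleration explicitly; an illustrative sketch:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo                   # illustrative
spec:
  selector:
    matchLabels: {app: ds-demo}
  template:
    metadata:
      labels: {app: ds-demo}
    spec:
      tolerations:                # lets the pods land on the tainted node too
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - {name: pause, image: k8s.gcr.io/pause:3.1}
EOF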
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4138, will wait for the garbage collector to delete the pods May 12 11:30:35.619: INFO: Deleting DaemonSet.extensions daemon-set took: 6.897847ms May 12 11:30:36.919: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300209858s May 12 11:30:51.923: INFO: Number of nodes with available pods: 0 May 12 11:30:51.923: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:30:51.925: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4138/daemonsets","resourceVersion":"10467423"},"items":null} May 12 11:30:51.928: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4138/pods","resourceVersion":"10467423"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:30:51.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4138" for this suite. May 12 11:30:57.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:30:58.098: INFO: namespace daemonsets-4138 deletion completed in 6.155293842s • [SLOW TEST:29.967 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:30:58.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 12 11:30:58.136: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:31:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2241" for this suite. 
May 12 11:31:35.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:31:35.872: INFO: namespace init-container-2241 deletion completed in 26.228232663s • [SLOW TEST:37.774 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:31:35.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:31:36.046: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:31:44.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1214" for this suite. 
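The pods spec above retrieves container logs over a websocket against the same log subresource the CLI uses (/api/v1/namespaces/<ns>/pods/<name>/log). Outside the suite, the porcelain equivalent is simply:

kubectl logs -f <pod-name>    # <pod-name> is a placeholder; streams from the same endpoint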
May 12 11:32:34.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:32:34.772: INFO: namespace pods-1214 deletion completed in 50.572470149s • [SLOW TEST:58.900 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:32:34.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:32:34.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9" in namespace "projected-8642" to be "success or failure" May 12 11:32:34.859: INFO: Pod "downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.288428ms May 12 11:32:36.863: INFO: Pod "downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030748547s May 12 11:32:38.885: INFO: Pod "downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052615846s STEP: Saw pod success May 12 11:32:38.885: INFO: Pod "downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9" satisfied condition "success or failure" May 12 11:32:38.888: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9 container client-container: STEP: delete the pod May 12 11:32:38.952: INFO: Waiting for pod downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9 to disappear May 12 11:32:38.956: INFO: Pod downwardapi-volume-c1471e4a-d90f-4063-9a60-41e9ea9068e9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:32:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8642" for this suite. 
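The projected downward-API spec above exposes only the pod's own name as a file in the volume. A runnable sketch of the equivalent volume stanza (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef: {fieldPath: metadata.name}
EOF
kubectl logs podname-demo    # prints "podname-demo"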
May 12 11:32:44.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:32:45.025: INFO: namespace projected-8642 deletion completed in 6.066234162s • [SLOW TEST:10.252 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:32:45.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:32:45.348: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 12 11:32:45.382: INFO: Number of nodes with available pods: 0 May 12 11:32:45.382: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 12 11:32:46.334: INFO: Number of nodes with available pods: 0 May 12 11:32:46.334: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:47.337: INFO: Number of nodes with available pods: 0 May 12 11:32:47.337: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:48.337: INFO: Number of nodes with available pods: 0 May 12 11:32:48.337: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:49.337: INFO: Number of nodes with available pods: 0 May 12 11:32:49.337: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:50.338: INFO: Number of nodes with available pods: 1 May 12 11:32:50.338: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 12 11:32:50.470: INFO: Number of nodes with available pods: 1 May 12 11:32:50.470: INFO: Number of running nodes: 0, number of available pods: 1 May 12 11:32:51.474: INFO: Number of nodes with available pods: 0 May 12 11:32:51.475: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 12 11:32:51.514: INFO: Number of nodes with available pods: 0 May 12 11:32:51.514: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:52.519: INFO: Number of nodes with available pods: 0 May 12 11:32:52.519: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:53.518: INFO: Number of nodes with available pods: 0 May 12 11:32:53.518: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:54.562: INFO: Number of nodes with available pods: 0 May 12 11:32:54.562: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:55.519: INFO: Number of nodes with available pods: 0 May 12 11:32:55.519: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:56.520: INFO: Number of nodes with available pods: 0 May 12 11:32:56.520: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:57.517: INFO: Number of nodes with available pods: 0 May 12 11:32:57.517: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:58.944: INFO: Number of nodes with available pods: 0 May 12 11:32:58.944: INFO: Node iruya-worker is running more than one daemon pod May 12 11:32:59.566: INFO: Number of nodes with available pods: 1 May 12 11:32:59.566: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8191, will wait for the garbage collector to delete the pods May 12 11:32:59.627: INFO: Deleting DaemonSet.extensions daemon-set took: 4.492185ms May 12 11:32:59.927: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.143546ms May 12 11:33:12.524: INFO: Number of nodes with available pods: 0 May 12 11:33:12.524: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:33:12.526: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8191/daemonsets","resourceVersion":"10467857"},"items":null} May 12 11:33:12.528: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8191/pods","resourceVersion":"10467857"},"items":null} 
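The daemon-set sequence above never touches the pods directly: the DaemonSet's nodeSelector plus node relabeling (blue, then green) is what launches and evicts the daemon pods, and switching the update strategy to RollingUpdate only changes how replacements roll out. A rough hand-driven equivalent, with hypothetical object names and the test's blue/green labels mirrored as color=...:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: selector-demo
spec:
  selector:
    matchLabels:
      app: selector-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: selector-demo
    spec:
      nodeSelector:
        color: blue          # daemon pods land only on matching nodes
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

kubectl label node iruya-worker color=blue --overwrite   # daemon pod appears
kubectl label node iruya-worker color=green --overwrite  # daemon pod is evicted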
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:33:12.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8191" for this suite. May 12 11:33:18.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:33:18.928: INFO: namespace daemonsets-8191 deletion completed in 6.311932671s • [SLOW TEST:33.903 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:33:18.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 11:33:19.085: INFO: Waiting up to 5m0s for pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782" in namespace "emptydir-3416" to be "success or failure" May 12 11:33:19.128: INFO: Pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782": Phase="Pending", Reason="", readiness=false. Elapsed: 42.861037ms May 12 11:33:21.174: INFO: Pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088782336s May 12 11:33:23.177: INFO: Pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782": Phase="Running", Reason="", readiness=true. Elapsed: 4.091855504s May 12 11:33:25.180: INFO: Pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095012794s STEP: Saw pod success May 12 11:33:25.180: INFO: Pod "pod-a72ed765-5ced-4c4a-8248-606bf6d0a782" satisfied condition "success or failure" May 12 11:33:25.183: INFO: Trying to get logs from node iruya-worker pod pod-a72ed765-5ced-4c4a-8248-606bf6d0a782 container test-container: STEP: delete the pod May 12 11:33:25.257: INFO: Waiting for pod pod-a72ed765-5ced-4c4a-8248-606bf6d0a782 to disappear May 12 11:33:25.269: INFO: Pod pod-a72ed765-5ced-4c4a-8248-606bf6d0a782 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:33:25.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3416" for this suite. 
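In the emptyDir matrix, (non-root,0777,tmpfs) means: run as a non-root UID, expect mode 0777 on the volume path, and back the volume with medium: Memory (tmpfs). The mode itself is set and checked by the test's mounttest image, not by any emptyDir API field. A hand-rolled sketch of the same setup, all names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # the "non-root" part of the variant
  containers:
  - name: main
    image: busybox:1.29
    # Show that the mount is tmpfs and inspect its permissions.
    command: ["sh", "-c", "mount | grep /mnt/ephemeral && ls -ld /mnt/ephemeral"]
    volumeMounts:
    - name: ephemeral
      mountPath: /mnt/ephemeral
  volumes:
  - name: ephemeral
    emptyDir:
      medium: Memory         # tmpfs instead of node disk
EOF
kubectl logs tmpfs-demo      # shows a tmpfs mount once the pod completes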
May 12 11:33:31.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:33:31.434: INFO: namespace emptydir-3416 deletion completed in 6.163236451s • [SLOW TEST:12.506 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:33:31.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 12 11:33:37.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6364dd37-a074-4817-b4b3-fdeae98f53f4 -c busybox-main-container --namespace=emptydir-2722 -- cat /usr/share/volumeshare/shareddata.txt' May 12 11:33:43.428: INFO: stderr: "I0512 11:33:43.372291 2605 log.go:172] (0xc00073e2c0) (0xc000648b40) Create stream\nI0512 11:33:43.372317 2605 log.go:172] (0xc00073e2c0) (0xc000648b40) Stream added, broadcasting: 1\nI0512 11:33:43.374290 2605 log.go:172] (0xc00073e2c0) Reply frame received for 1\nI0512 11:33:43.374320 2605 log.go:172] (0xc00073e2c0) (0xc000648be0) Create stream\nI0512 11:33:43.374328 2605 log.go:172] (0xc00073e2c0) (0xc000648be0) Stream added, broadcasting: 3\nI0512 11:33:43.374944 2605 log.go:172] (0xc00073e2c0) Reply frame received for 3\nI0512 11:33:43.374978 2605 log.go:172] (0xc00073e2c0) (0xc0003d8000) Create stream\nI0512 11:33:43.374994 2605 log.go:172] (0xc00073e2c0) (0xc0003d8000) Stream added, broadcasting: 5\nI0512 11:33:43.375697 2605 log.go:172] (0xc00073e2c0) Reply frame received for 5\nI0512 11:33:43.423688 2605 log.go:172] (0xc00073e2c0) Data frame received for 5\nI0512 11:33:43.423726 2605 log.go:172] (0xc0003d8000) (5) Data frame handling\nI0512 11:33:43.423750 2605 log.go:172] (0xc00073e2c0) Data frame received for 3\nI0512 11:33:43.423763 2605 log.go:172] (0xc000648be0) (3) Data frame handling\nI0512 11:33:43.423778 2605 log.go:172] (0xc000648be0) (3) Data frame sent\nI0512 11:33:43.423791 2605 log.go:172] (0xc00073e2c0) Data frame received for 3\nI0512 11:33:43.423802 2605 log.go:172] (0xc000648be0) (3) Data frame handling\nI0512 11:33:43.425000 2605 log.go:172] (0xc00073e2c0) Data frame received for 1\nI0512 11:33:43.425018 2605 log.go:172] (0xc000648b40) (1) Data frame handling\nI0512 11:33:43.425049 2605 log.go:172] (0xc000648b40) (1) Data frame sent\nI0512 11:33:43.425077 2605 log.go:172] (0xc00073e2c0) (0xc000648b40) Stream removed, broadcasting: 1\nI0512 11:33:43.425096 2605 log.go:172] (0xc00073e2c0) Go away received\nI0512 11:33:43.425650 2605
log.go:172] (0xc00073e2c0) (0xc000648b40) Stream removed, broadcasting: 1\nI0512 11:33:43.425664 2605 log.go:172] (0xc00073e2c0) (0xc000648be0) Stream removed, broadcasting: 3\nI0512 11:33:43.425671 2605 log.go:172] (0xc00073e2c0) (0xc0003d8000) Stream removed, broadcasting: 5\n" May 12 11:33:43.429: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:33:43.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2722" for this suite. May 12 11:33:53.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:33:53.551: INFO: namespace emptydir-2722 deletion completed in 10.119948603s • [SLOW TEST:22.117 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:33:53.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:34:44.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5673" for this suite. 
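The container-runtime cases above ('terminate-cmd-rpa', 'terminate-cmd-rpof', 'terminate-cmd-rpn') pair a container's exit behavior with restartPolicy Always, OnFailure, and Never, then assert on RestartCount, Phase, the Ready condition, and State. One variant checked by hand, names hypothetical:

# A container that fails once; with restartPolicy OnFailure the kubelet
# restarts it and restartCount climbs between polls.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF

# The same status fields the e2e test asserts on:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'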
May 12 11:34:52.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:34:53.012: INFO: namespace container-runtime-5673 deletion completed in 8.382914521s • [SLOW TEST:59.461 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:34:53.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-4t5g STEP: Creating a pod to test atomic-volume-subpath May 12 11:34:53.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4t5g" in namespace "subpath-9168" to be "success or failure" May 12 11:34:53.898: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Pending", Reason="", readiness=false. Elapsed: 55.52417ms May 12 11:34:55.902: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059782507s May 12 11:34:57.984: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141863902s May 12 11:35:00.035: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 6.193321117s May 12 11:35:02.038: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 8.195862968s May 12 11:35:04.065: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 10.222910811s May 12 11:35:06.095: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 12.253122221s May 12 11:35:08.099: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 14.257147187s May 12 11:35:10.293: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 16.45114604s May 12 11:35:12.296: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 18.454216674s May 12 11:35:14.301: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.458819145s May 12 11:35:16.958: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 23.115800886s May 12 11:35:18.962: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Running", Reason="", readiness=true. Elapsed: 25.119974435s May 12 11:35:20.966: INFO: Pod "pod-subpath-test-projected-4t5g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.123813369s STEP: Saw pod success May 12 11:35:20.966: INFO: Pod "pod-subpath-test-projected-4t5g" satisfied condition "success or failure" May 12 11:35:20.969: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-4t5g container test-container-subpath-projected-4t5g: STEP: delete the pod May 12 11:35:20.990: INFO: Waiting for pod pod-subpath-test-projected-4t5g to disappear May 12 11:35:20.995: INFO: Pod pod-subpath-test-projected-4t5g no longer exists STEP: Deleting pod pod-subpath-test-projected-4t5g May 12 11:35:20.995: INFO: Deleting pod "pod-subpath-test-projected-4t5g" in namespace "subpath-9168" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:35:20.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9168" for this suite. May 12 11:35:27.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:35:27.104: INFO: namespace subpath-9168 deletion completed in 6.104327115s • [SLOW TEST:34.091 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:35:27.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:35:27.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90" in namespace "downward-api-1620" to be "success or failure" May 12 11:35:27.175: INFO: Pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.762824ms May 12 11:35:29.179: INFO: Pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007760384s May 12 11:35:31.191: INFO: Pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90": Phase="Running", Reason="", readiness=true. Elapsed: 4.019742381s May 12 11:35:33.287: INFO: Pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116159449s STEP: Saw pod success May 12 11:35:33.287: INFO: Pod "downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90" satisfied condition "success or failure" May 12 11:35:33.291: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90 container client-container: STEP: delete the pod May 12 11:35:33.534: INFO: Waiting for pod downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90 to disappear May 12 11:35:33.808: INFO: Pod downwardapi-volume-7eabee3c-e822-43d4-80ca-56098a2a6c90 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:35:33.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1620" for this suite. May 12 11:35:41.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:35:41.896: INFO: namespace downward-api-1620 deletion completed in 8.084590402s • [SLOW TEST:14.791 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:35:41.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e5ea447b-2c3b-40f9-904f-feb001cfd572 STEP: Creating a pod to test consume configMaps May 12 11:35:42.047: INFO: Waiting up to 5m0s for pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f" in namespace "configmap-8646" to be "success or failure" May 12 11:35:42.092: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.782288ms May 12 11:35:44.323: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276460392s May 12 11:35:46.571: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.52390024s May 12 11:35:48.760: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713284091s May 12 11:35:51.120: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.073226109s May 12 11:35:53.124: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.077255188s STEP: Saw pod success May 12 11:35:53.124: INFO: Pod "pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f" satisfied condition "success or failure" May 12 11:35:53.127: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f container configmap-volume-test: STEP: delete the pod May 12 11:35:53.199: INFO: Waiting for pod pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f to disappear May 12 11:35:53.201: INFO: Pod pod-configmaps-b264fa51-6921-470b-86cf-4364024fef6f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:35:53.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8646" for this suite. May 12 11:36:01.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:36:01.405: INFO: namespace configmap-8646 deletion completed in 8.199781257s • [SLOW TEST:19.508 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:36:01.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-501d88d6-47d9-432b-8432-8ba92bf9164d STEP: Creating a pod to test consume secrets May 12 11:36:02.003: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080" in namespace "projected-9569" to be "success or failure" May 12 11:36:02.007: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080": Phase="Pending", Reason="", readiness=false. Elapsed: 4.894223ms May 12 11:36:04.211: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208314146s May 12 11:36:06.492: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.489029661s May 12 11:36:08.593: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080": Phase="Running", Reason="", readiness=true. Elapsed: 6.590635961s May 12 11:36:10.596: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.593844166s STEP: Saw pod success May 12 11:36:10.596: INFO: Pod "pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080" satisfied condition "success or failure" May 12 11:36:10.599: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080 container projected-secret-volume-test: STEP: delete the pod May 12 11:36:10.643: INFO: Waiting for pod pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080 to disappear May 12 11:36:10.654: INFO: Pod pod-projected-secrets-ca05e555-4673-48ca-805c-4942c7b04080 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:36:10.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9569" for this suite. May 12 11:36:18.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:36:18.816: INFO: namespace projected-9569 deletion completed in 8.15955132s • [SLOW TEST:17.411 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:36:18.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:36:19.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2457' May 12 11:36:19.305: INFO: stderr: "" May 12 11:36:19.305: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 12 11:36:24.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2457 -o json' May 12 11:36:24.451: INFO: stderr: "" 
May 12 11:36:24.451: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T11:36:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2457\",\n \"resourceVersion\": \"10468475\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2457/pods/e2e-test-nginx-pod\",\n \"uid\": \"f6ba5e6d-cb96-4d9f-89f6-b084f364c1ae\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lxxtd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lxxtd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lxxtd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:36:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:36:22Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:36:22Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T11:36:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://18eda1361a98dfe550580815ac78369763e09ee12577cf7c2d412f95a5abe40e\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T11:36:22Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.215\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T11:36:19Z\"\n }\n}\n" STEP: replace the image in the pod May 12 11:36:24.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2457' May 12 11:36:24.718: INFO: stderr: "" May 12 11:36:24.718: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 12 11:36:24.740: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2457' May 12 11:36:31.977: INFO: stderr: "" May 12 11:36:31.977: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:36:31.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2457" for this suite. May 12 11:36:38.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:36:38.499: INFO: namespace kubectl-2457 deletion completed in 6.24282507s • [SLOW TEST:19.682 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:36:38.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-c93cf9dc-5462-4491-9075-e5a461ed89fd STEP: Creating secret with name s-test-opt-upd-2bb3b322-f6d6-4783-a757-5e900a251d29 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c93cf9dc-5462-4491-9075-e5a461ed89fd STEP: Updating secret s-test-opt-upd-2bb3b322-f6d6-4783-a757-5e900a251d29 STEP: Creating secret with name s-test-opt-create-84cf4d96-d753-43cf-8732-ea2b595a7fd7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:36:53.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5389" for this suite. 
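The secrets case above ("optional updates should be reflected in volume") leans on two kubelet behaviors: a secret volume marked optional: true mounts even when the named secret is absent, and the contents of mounted secret volumes are refreshed after the secret changes, without restarting the pod. A sketch with hypothetical names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/secret/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    secret:
      secretName: demo-secret
      optional: true      # pod still starts if the secret is deleted or missing
EOF

# Update the secret in place; the projected file catches up within the
# kubelet sync period, no pod restart needed.
kubectl create secret generic demo-secret --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl apply -f -
kubectl logs -f optional-secret-demo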
May 12 11:37:19.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:37:19.274: INFO: namespace secrets-5389 deletion completed in 26.08963853s • [SLOW TEST:40.775 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:37:19.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 11:37:19.725: INFO: Waiting up to 5m0s for pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115" in namespace "emptydir-3404" to be "success or failure" May 12 11:37:19.735: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115": Phase="Pending", Reason="", readiness=false. Elapsed: 9.338952ms May 12 11:37:22.055: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329759057s May 12 11:37:24.060: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334192575s May 12 11:37:26.373: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647733035s May 12 11:37:28.377: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.651785463s STEP: Saw pod success May 12 11:37:28.377: INFO: Pod "pod-e3deb8ce-4a99-4815-8600-4ae94dce1115" satisfied condition "success or failure" May 12 11:37:28.380: INFO: Trying to get logs from node iruya-worker2 pod pod-e3deb8ce-4a99-4815-8600-4ae94dce1115 container test-container: STEP: delete the pod May 12 11:37:28.618: INFO: Waiting for pod pod-e3deb8ce-4a99-4815-8600-4ae94dce1115 to disappear May 12 11:37:28.749: INFO: Pod pod-e3deb8ce-4a99-4815-8600-4ae94dce1115 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:37:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3404" for this suite. 
May 12 11:37:34.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:37:35.129: INFO: namespace emptydir-3404 deletion completed in 6.376145238s • [SLOW TEST:15.855 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:37:35.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 12 11:37:36.216: INFO: Waiting up to 5m0s for pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a" in namespace "downward-api-2611" to be "success or failure" May 12 11:37:36.275: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.667378ms May 12 11:37:39.044: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827431012s May 12 11:37:41.181: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.965058393s May 12 11:37:43.185: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.968458921s May 12 11:37:45.188: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.971521372s STEP: Saw pod success May 12 11:37:45.188: INFO: Pod "downward-api-d324cada-9f14-4eea-a361-4530713f305a" satisfied condition "success or failure" May 12 11:37:45.190: INFO: Trying to get logs from node iruya-worker pod downward-api-d324cada-9f14-4eea-a361-4530713f305a container dapi-container: STEP: delete the pod May 12 11:37:45.220: INFO: Waiting for pod downward-api-d324cada-9f14-4eea-a361-4530713f305a to disappear May 12 11:37:45.333: INFO: Pod downward-api-d324cada-9f14-4eea-a361-4530713f305a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:37:45.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2611" for this suite. 
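The downward-api test above injects pod identity through env valueFrom/fieldRef; unlike downwardAPI volumes, these values are resolved once at container start. A minimal pod to the same effect, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo   # pod name, namespace, and IP on one line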
May 12 11:37:51.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:37:51.546: INFO: namespace downward-api-2611 deletion completed in 6.210191662s • [SLOW TEST:16.416 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:37:51.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0512 11:38:01.634474 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:38:01.634: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:38:01.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9534" for this suite. 
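The garbage-collector test deletes a ReplicationController without orphaning, so the pods it created, which carry an ownerReference to the RC, are collected as well. The same contrast from the CLI (RC name hypothetical; --cascade=false was the orphaning spelling for kubectl of this vintage):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

kubectl delete rc gc-demo                    # default: dependents are GC'd too
# kubectl delete rc gc-demo --cascade=false  # would orphan the pods instead
kubectl get pods -l app=gc-demo              # empty once the collector catches up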
May 12 11:38:09.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:38:09.891: INFO: namespace gc-9534 deletion completed in 8.252445814s • [SLOW TEST:18.344 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:38:09.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 12 11:38:19.070: INFO: Successfully updated pod "annotationupdate72d1fcf9-dbb6-45d1-b98c-d9e3fb97b240" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:38:21.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6597" for this suite. 
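This is the volume-backed counterpart to env-var injection: a downwardAPI volume is kept in sync, so when the test patches the pod's annotations the kubelet rewrites the projected file, which is exactly what "should update annotations on modification" waits for. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: one
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

kubectl annotate pod annotation-demo build=two --overwrite
kubectl logs -f annotation-demo   # the file content changes without a restart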
May 12 11:38:45.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:38:45.690: INFO: namespace downward-api-6597 deletion completed in 24.527751116s • [SLOW TEST:35.799 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:38:45.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 12 11:38:46.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-442 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 12 11:38:53.642: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0512 11:38:53.555927 2711 log.go:172] (0xc000a12160) (0xc0008de1e0) Create stream\nI0512 11:38:53.555994 2711 log.go:172] (0xc000a12160) (0xc0008de1e0) Stream added, broadcasting: 1\nI0512 11:38:53.559208 2711 log.go:172] (0xc000a12160) Reply frame received for 1\nI0512 11:38:53.559256 2711 log.go:172] (0xc000a12160) (0xc0003dc000) Create stream\nI0512 11:38:53.559268 2711 log.go:172] (0xc000a12160) (0xc0003dc000) Stream added, broadcasting: 3\nI0512 11:38:53.560435 2711 log.go:172] (0xc000a12160) Reply frame received for 3\nI0512 11:38:53.560481 2711 log.go:172] (0xc000a12160) (0xc0008de280) Create stream\nI0512 11:38:53.560495 2711 log.go:172] (0xc000a12160) (0xc0008de280) Stream added, broadcasting: 5\nI0512 11:38:53.562006 2711 log.go:172] (0xc000a12160) Reply frame received for 5\nI0512 11:38:53.562062 2711 log.go:172] (0xc000a12160) (0xc0003ea000) Create stream\nI0512 11:38:53.562092 2711 log.go:172] (0xc000a12160) (0xc0003ea000) Stream added, broadcasting: 7\nI0512 11:38:53.563224 2711 log.go:172] (0xc000a12160) Reply frame received for 7\nI0512 11:38:53.563325 2711 log.go:172] (0xc0003dc000) (3) Writing data frame\nI0512 11:38:53.563463 2711 log.go:172] (0xc0003dc000) (3) Writing data frame\nI0512 11:38:53.564404 2711 log.go:172] (0xc000a12160) Data frame received for 5\nI0512 11:38:53.564499 2711 log.go:172] (0xc0008de280) (5) Data frame handling\nI0512 11:38:53.564547 2711 log.go:172] (0xc0008de280) (5) Data frame sent\nI0512 11:38:53.565707 2711 log.go:172] (0xc000a12160) Data frame received for 5\nI0512 11:38:53.565735 2711 log.go:172] (0xc0008de280) (5) Data frame handling\nI0512 11:38:53.565756 2711 log.go:172] (0xc0008de280) (5) Data frame sent\nI0512 11:38:53.604607 2711 log.go:172] (0xc000a12160) Data frame received for 7\nI0512 11:38:53.604670 2711 log.go:172] (0xc0003ea000) (7) Data frame handling\nI0512 11:38:53.604752 2711 log.go:172] (0xc000a12160) Data frame received for 5\nI0512 11:38:53.604775 2711 log.go:172] (0xc0008de280) (5) Data frame handling\nI0512 11:38:53.605325 2711 log.go:172] (0xc000a12160) Data frame received for 1\nI0512 11:38:53.605373 2711 log.go:172] (0xc0008de1e0) (1) Data frame handling\nI0512 11:38:53.605425 2711 log.go:172] (0xc0008de1e0) (1) Data frame sent\nI0512 11:38:53.605697 2711 log.go:172] (0xc000a12160) (0xc0003dc000) Stream removed, broadcasting: 3\nI0512 11:38:53.605745 2711 log.go:172] (0xc000a12160) (0xc0008de1e0) Stream removed, broadcasting: 1\nI0512 11:38:53.605862 2711 log.go:172] (0xc000a12160) (0xc0008de1e0) Stream removed, broadcasting: 1\nI0512 11:38:53.605895 2711 log.go:172] (0xc000a12160) (0xc0003dc000) Stream removed, broadcasting: 3\nI0512 11:38:53.605912 2711 log.go:172] (0xc000a12160) (0xc0008de280) Stream removed, broadcasting: 5\nI0512 11:38:53.606041 2711 log.go:172] (0xc000a12160) Go away received\nI0512 11:38:53.606244 2711 log.go:172] (0xc000a12160) (0xc0003ea000) Stream removed, broadcasting: 7\n" May 12 11:38:53.642: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:38:55.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-442" for this suite. 
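The deprecation warning captured in stderr above is worth acting on: the job/v1 generator was removed in later kubectl releases. The same run-attach-delete lifecycle without it, using a bare pod (names hypothetical):

# Pod-based equivalent of the deprecated job generator: --restart=Never
# plus --rm gives the same attach-then-clean-up behavior.
echo abcd1234 | kubectl run e2e-rm-demo --image=busybox:1.29 \
  --restart=Never --rm=true --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'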
May 12 11:39:01.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:39:01.766: INFO: namespace kubectl-442 deletion completed in 6.098740977s • [SLOW TEST:16.075 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:39:01.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-90011314-880d-4721-91db-7a88ef072f9b STEP: Creating a pod to test consume secrets May 12 11:39:02.040: INFO: Waiting up to 5m0s for pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632" in namespace "secrets-1359" to be "success or failure" May 12 11:39:02.176: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Pending", Reason="", readiness=false. Elapsed: 136.632653ms May 12 11:39:04.578: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.538499288s May 12 11:39:06.582: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542471806s May 12 11:39:08.775: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735661934s May 12 11:39:10.865: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825536105s May 12 11:39:12.869: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.829342571s STEP: Saw pod success May 12 11:39:12.869: INFO: Pod "pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632" satisfied condition "success or failure" May 12 11:39:12.871: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632 container secret-volume-test: STEP: delete the pod May 12 11:39:13.029: INFO: Waiting for pod pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632 to disappear May 12 11:39:13.115: INFO: Pod pod-secrets-d1b02c88-485b-4c65-bbd8-f0c575a17632 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:39:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1359" for this suite. 
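The secret variant above combines three knobs: defaultMode on the volume source, plus runAsUser and fsGroup in the pod securityContext, so a non-root process that holds the fsGroup can read group-readable projected files. A sketch, names and IDs hypothetical:

kubectl create secret generic mode-demo-secret --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root process
    fsGroup: 2000            # volume files are group-owned by this GID
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/key"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    secret:
      secretName: mode-demo-secret
      defaultMode: 0440      # octal: owner and group read only
EOF
kubectl logs secret-mode-demo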
May 12 11:39:19.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:39:19.770: INFO: namespace secrets-1359 deletion completed in 6.65028209s • [SLOW TEST:18.005 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:39:19.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:39:29.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2886" for this suite. May 12 11:39:38.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:39:38.231: INFO: namespace emptydir-wrapper-2886 deletion completed in 8.14245309s • [SLOW TEST:18.460 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:39:38.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 12 11:39:38.334: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
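Registering here means creating an APIService object so the aggregation layer proxies one group/version to an in-cluster Service backed by the sample apiserver. A sketch of the shape involved, with illustrative names (the test also provisions the Service, certificates, and RBAC around it):

  kubectl apply -f - <<'EOF'
  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.wardle.k8s.io       # <version>.<group>; names illustrative
  spec:
    group: wardle.k8s.io
    version: v1alpha1
    groupPriorityMinimum: 2000
    versionPriority: 200
    insecureSkipTLSVerify: true        # sketch only; real setups pin a caBundle
    service:
      name: sample-api                 # Service fronting the extension apiserver
      namespace: aggregator-7683       # this run's test namespace
  EOF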
May 12 11:39:38.954: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 12 11:39:41.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880379, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:39:43.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880379, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724880378, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:39:46.273: INFO: Waited 622.614093ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:39:47.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7683" for this suite. 
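The two DeploymentStatus dumps above are the test polling until the Available condition flips: Reason=MinimumReplicasUnavailable clears once ReadyReplicas reaches the desired count. The same wait can be done by hand against this run's objects:

  kubectl -n aggregator-7683 rollout status deployment/sample-apiserver-deployment --timeout=120s   # blocks until available or timeout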
May 12 11:39:55.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:39:55.759: INFO: namespace aggregator-7683 deletion completed in 8.696421153s • [SLOW TEST:17.528 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:39:55.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 11:39:57.061: INFO: Waiting up to 5m0s for pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611" in namespace "emptydir-1475" to be "success or failure" May 12 11:39:57.144: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Pending", Reason="", readiness=false. Elapsed: 82.616151ms May 12 11:39:59.147: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08615676s May 12 11:40:01.152: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09077574s May 12 11:40:03.156: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095323144s May 12 11:40:05.171: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110422744s May 12 11:40:07.174: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112940745s STEP: Saw pod success May 12 11:40:07.174: INFO: Pod "pod-b7e05261-b0f9-4787-bcc7-b6abf394a611" satisfied condition "success or failure" May 12 11:40:07.175: INFO: Trying to get logs from node iruya-worker2 pod pod-b7e05261-b0f9-4787-bcc7-b6abf394a611 container test-container: STEP: delete the pod May 12 11:40:07.382: INFO: Waiting for pod pod-b7e05261-b0f9-4787-bcc7-b6abf394a611 to disappear May 12 11:40:07.578: INFO: Pod pod-b7e05261-b0f9-4787-bcc7-b6abf394a611 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:40:07.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1475" for this suite. 
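The emptydir case above boils down to a non-root pod writing a 0644 file into an emptyDir on the node's default medium (local disk; medium: Memory would select tmpfs instead). A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo           # illustrative name
  spec:
    securityContext:
      runAsUser: 1000                  # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}                     # no medium set, so node-local disk
  EOF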
May 12 11:40:13.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:13.682: INFO: namespace emptydir-1475 deletion completed in 6.098395351s • [SLOW TEST:17.923 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:40:13.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:40:13.773: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 12 11:40:18.992: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 11:40:21.165: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 11:40:29.507: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6885,SelfLink:/apis/apps/v1/namespaces/deployment-6885/deployments/test-cleanup-deployment,UID:6cb95e71-51dc-4e67-952a-43f8db10d26c,ResourceVersion:10469346,Generation:1,CreationTimestamp:2020-05-12 11:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 11:40:21 +0000 UTC 2020-05-12 11:40:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 11:40:28 +0000 UTC 2020-05-12 11:40:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 11:40:29.509: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6885,SelfLink:/apis/apps/v1/namespaces/deployment-6885/replicasets/test-cleanup-deployment-55bbcbc84c,UID:2ed93a32-c67e-4071-b324-1f2289c688f7,ResourceVersion:10469333,Generation:1,CreationTimestamp:2020-05-12 11:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6cb95e71-51dc-4e67-952a-43f8db10d26c 0xc0037649d7 0xc0037649d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 11:40:29.511: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vf2lz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vf2lz,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6885,SelfLink:/api/v1/namespaces/deployment-6885/pods/test-cleanup-deployment-55bbcbc84c-vf2lz,UID:5ca4e776-2825-4cc4-9c79-cb9e5d0919bc,ResourceVersion:10469332,Generation:0,CreationTimestamp:2020-05-12 11:40:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 2ed93a32-c67e-4071-b324-1f2289c688f7 0xc003718c17 0xc003718c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tr6vt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tr6vt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-tr6vt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003718c90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003718cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:40:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:40:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:40:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:40:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.231,StartTime:2020-05-12 11:40:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 11:40:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://91aed61791a3ff11335bd5c453a0ceaa67dacfbfce0a7611e699071931d97089}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:40:29.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6885" for this suite. May 12 11:40:39.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:39.624: INFO: namespace deployment-6885 deletion completed in 10.110698526s • [SLOW TEST:25.939 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:40:39.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:40:41.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3" in namespace "projected-2758" to be "success or failure" May 12 11:40:41.688: INFO: Pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3": Phase="Pending", Reason="", readiness=false. Elapsed: 459.366375ms May 12 11:40:43.692: INFO: Pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.464237725s May 12 11:40:45.697: INFO: Pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3": Phase="Running", Reason="", readiness=true. Elapsed: 4.4688866s May 12 11:40:47.700: INFO: Pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.472122062s STEP: Saw pod success May 12 11:40:47.700: INFO: Pod "downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3" satisfied condition "success or failure" May 12 11:40:47.704: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3 container client-container: STEP: delete the pod May 12 11:40:47.900: INFO: Waiting for pod downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3 to disappear May 12 11:40:48.046: INFO: Pod downwardapi-volume-964a2f8b-1345-44bd-8b44-395315f020d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:40:48.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2758" for this suite. May 12 11:40:54.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:40:54.169: INFO: namespace projected-2758 deletion completed in 6.120836348s • [SLOW TEST:14.545 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:40:54.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9542 I0512 11:40:54.315936 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9542, replica count: 1 I0512 11:40:55.366371 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:40:56.366603 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:40:57.366779 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:40:58.366952 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 11:40:59.367129 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 11:40:59.561: INFO: Created: latency-svc-s6snm May 12 11:40:59.565: INFO: Got endpoints: latency-svc-s6snm [98.083697ms] May 12 11:40:59.615: INFO: Created: latency-svc-q4ztg May 12 11:40:59.627: INFO: Got endpoints: latency-svc-q4ztg [61.479696ms] May 12 11:40:59.705: INFO: Created: latency-svc-wjb5r May 12 11:40:59.758: INFO: Got endpoints: latency-svc-wjb5r [193.190525ms] May 12 11:40:59.760: INFO: Created: latency-svc-t6rgz May 12 11:40:59.927: INFO: Got endpoints: latency-svc-t6rgz [362.34489ms] May 12 11:40:59.935: INFO: Created: latency-svc-sqnfg May 12 11:40:59.963: INFO: Got endpoints: latency-svc-sqnfg [397.922954ms] May 12 11:41:00.022: INFO: Created: latency-svc-bh7z5 May 12 11:41:00.070: INFO: Got endpoints: latency-svc-bh7z5 [504.934975ms] May 12 11:41:00.082: INFO: Created: latency-svc-crd7w May 12 11:41:00.095: INFO: Got endpoints: latency-svc-crd7w [529.841813ms] May 12 11:41:00.124: INFO: Created: latency-svc-mvl2g May 12 11:41:00.134: INFO: Got endpoints: latency-svc-mvl2g [568.909782ms] May 12 11:41:00.214: INFO: Created: latency-svc-2ts2l May 12 11:41:00.220: INFO: Got endpoints: latency-svc-2ts2l [655.242624ms] May 12 11:41:00.401: INFO: Created: latency-svc-l457r May 12 11:41:00.416: INFO: Got endpoints: latency-svc-l457r [851.379151ms] May 12 11:41:00.442: INFO: Created: latency-svc-hh5ml May 12 11:41:00.447: INFO: Got endpoints: latency-svc-hh5ml [881.456902ms] May 12 11:41:00.472: INFO: Created: latency-svc-j2rxk May 12 11:41:00.486: INFO: Got endpoints: latency-svc-j2rxk [920.977099ms] May 12 11:41:00.538: INFO: Created: latency-svc-lfz4q May 12 11:41:00.546: INFO: Got endpoints: latency-svc-lfz4q [981.145412ms] May 12 11:41:00.574: INFO: Created: latency-svc-x2pqp May 12 11:41:00.589: INFO: Got endpoints: latency-svc-x2pqp [1.023781613s] May 12 11:41:00.623: INFO: Created: latency-svc-c6nng May 12 11:41:00.637: INFO: Got endpoints: latency-svc-c6nng [1.07193939s] May 12 11:41:00.743: INFO: Created: latency-svc-hgrqs May 12 11:41:00.759: INFO: Got endpoints: latency-svc-hgrqs [1.194109336s] May 12 11:41:00.879: INFO: Created: latency-svc-xrd2m May 12 11:41:00.882: INFO: Got endpoints: latency-svc-xrd2m [1.255333686s] May 12 11:41:01.023: INFO: Created: latency-svc-lbwhx May 12 11:41:01.027: INFO: Got endpoints: latency-svc-lbwhx [1.268641996s] May 12 11:41:01.073: INFO: Created: latency-svc-k4v8k May 12 11:41:01.096: INFO: Got endpoints: latency-svc-k4v8k [1.169025158s] May 12 11:41:01.172: INFO: Created: latency-svc-wdw7p May 12 11:41:01.175: INFO: Got endpoints: latency-svc-wdw7p [1.211793187s] May 12 11:41:01.222: INFO: Created: latency-svc-8bh4s May 12 11:41:01.239: INFO: Got endpoints: latency-svc-8bh4s [1.169366791s] May 12 11:41:01.270: INFO: Created: latency-svc-kzmvw May 12 11:41:01.315: INFO: Got endpoints: latency-svc-kzmvw [1.220191247s] May 12 11:41:01.337: INFO: Created: latency-svc-5fb9z May 12 11:41:01.353: INFO: Got endpoints: latency-svc-5fb9z [1.21937172s] May 12 11:41:01.373: INFO: Created: latency-svc-g6mcl May 12 11:41:01.389: INFO: Got endpoints: latency-svc-g6mcl [1.16896379s] May 12 11:41:01.408: INFO: Created: latency-svc-wbrlm May 12 11:41:01.454: INFO: Got endpoints: latency-svc-wbrlm [1.037200117s] May 12 11:41:01.468: INFO: Created: latency-svc-xrpkl May 12 11:41:01.486: INFO: Got endpoints: latency-svc-xrpkl [1.039548538s] May 12 11:41:01.511: INFO: Created: latency-svc-dvwhm May 12 11:41:01.531: INFO: Got endpoints: latency-svc-dvwhm [1.045262111s] May 12 
11:41:01.552: INFO: Created: latency-svc-lgdph May 12 11:41:01.633: INFO: Got endpoints: latency-svc-lgdph [1.086587635s] May 12 11:41:01.635: INFO: Created: latency-svc-fs726 May 12 11:41:01.638: INFO: Got endpoints: latency-svc-fs726 [1.049005382s] May 12 11:41:01.666: INFO: Created: latency-svc-kdc5s May 12 11:41:01.692: INFO: Got endpoints: latency-svc-kdc5s [1.05522818s] May 12 11:41:01.714: INFO: Created: latency-svc-9d7p2 May 12 11:41:01.728: INFO: Got endpoints: latency-svc-9d7p2 [968.961274ms] May 12 11:41:01.768: INFO: Created: latency-svc-jmqxr May 12 11:41:01.795: INFO: Got endpoints: latency-svc-jmqxr [912.606289ms] May 12 11:41:01.817: INFO: Created: latency-svc-hlw2s May 12 11:41:01.834: INFO: Got endpoints: latency-svc-hlw2s [807.206376ms] May 12 11:41:01.902: INFO: Created: latency-svc-mw9bn May 12 11:41:01.905: INFO: Got endpoints: latency-svc-mw9bn [808.310994ms] May 12 11:41:01.942: INFO: Created: latency-svc-r7bmb May 12 11:41:01.957: INFO: Got endpoints: latency-svc-r7bmb [782.443128ms] May 12 11:41:01.978: INFO: Created: latency-svc-l66g5 May 12 11:41:01.993: INFO: Got endpoints: latency-svc-l66g5 [754.041786ms] May 12 11:41:02.083: INFO: Created: latency-svc-f7wp2 May 12 11:41:02.107: INFO: Got endpoints: latency-svc-f7wp2 [791.868225ms] May 12 11:41:02.226: INFO: Created: latency-svc-cj8gw May 12 11:41:02.230: INFO: Got endpoints: latency-svc-cj8gw [876.178641ms] May 12 11:41:02.316: INFO: Created: latency-svc-r54cv May 12 11:41:02.447: INFO: Got endpoints: latency-svc-r54cv [1.057800284s] May 12 11:41:02.451: INFO: Created: latency-svc-nx7cv May 12 11:41:02.530: INFO: Got endpoints: latency-svc-nx7cv [1.076474695s] May 12 11:41:02.609: INFO: Created: latency-svc-thnj5 May 12 11:41:02.623: INFO: Got endpoints: latency-svc-thnj5 [1.136633015s] May 12 11:41:02.753: INFO: Created: latency-svc-9skm8 May 12 11:41:02.755: INFO: Got endpoints: latency-svc-9skm8 [1.223861787s] May 12 11:41:02.806: INFO: Created: latency-svc-whf8r May 12 11:41:02.821: INFO: Got endpoints: latency-svc-whf8r [1.188494149s] May 12 11:41:02.842: INFO: Created: latency-svc-gccj9 May 12 11:41:02.924: INFO: Got endpoints: latency-svc-gccj9 [1.285938481s] May 12 11:41:02.939: INFO: Created: latency-svc-5cmc2 May 12 11:41:02.954: INFO: Got endpoints: latency-svc-5cmc2 [1.261466775s] May 12 11:41:02.988: INFO: Created: latency-svc-t8pjb May 12 11:41:03.002: INFO: Got endpoints: latency-svc-t8pjb [1.273466787s] May 12 11:41:03.076: INFO: Created: latency-svc-84ghs May 12 11:41:03.079: INFO: Got endpoints: latency-svc-84ghs [1.283980545s] May 12 11:41:03.112: INFO: Created: latency-svc-p82md May 12 11:41:03.134: INFO: Got endpoints: latency-svc-p82md [1.300205385s] May 12 11:41:03.164: INFO: Created: latency-svc-w94hl May 12 11:41:03.170: INFO: Got endpoints: latency-svc-w94hl [1.265569373s] May 12 11:41:03.262: INFO: Created: latency-svc-rqfng May 12 11:41:03.270: INFO: Got endpoints: latency-svc-rqfng [1.313099144s] May 12 11:41:03.287: INFO: Created: latency-svc-kblnd May 12 11:41:03.328: INFO: Got endpoints: latency-svc-kblnd [1.334278135s] May 12 11:41:03.358: INFO: Created: latency-svc-slp6x May 12 11:41:03.399: INFO: Got endpoints: latency-svc-slp6x [1.291888215s] May 12 11:41:03.406: INFO: Created: latency-svc-w66lp May 12 11:41:03.425: INFO: Got endpoints: latency-svc-w66lp [1.195250422s] May 12 11:41:03.444: INFO: Created: latency-svc-tc42l May 12 11:41:03.454: INFO: Got endpoints: latency-svc-tc42l [1.007040617s] May 12 11:41:03.473: INFO: Created: latency-svc-qpk5r May 12 11:41:03.485: 
INFO: Got endpoints: latency-svc-qpk5r [954.746259ms] May 12 11:41:03.543: INFO: Created: latency-svc-d2776 May 12 11:41:03.546: INFO: Got endpoints: latency-svc-d2776 [922.690419ms] May 12 11:41:03.568: INFO: Created: latency-svc-x4pz6 May 12 11:41:03.582: INFO: Got endpoints: latency-svc-x4pz6 [826.347632ms] May 12 11:41:03.604: INFO: Created: latency-svc-99tcd May 12 11:41:03.617: INFO: Got endpoints: latency-svc-99tcd [795.808092ms] May 12 11:41:03.681: INFO: Created: latency-svc-qjmx2 May 12 11:41:03.683: INFO: Got endpoints: latency-svc-qjmx2 [759.429775ms] May 12 11:41:03.743: INFO: Created: latency-svc-q6skc May 12 11:41:03.775: INFO: Got endpoints: latency-svc-q6skc [820.721797ms] May 12 11:41:03.837: INFO: Created: latency-svc-zj2tk May 12 11:41:03.840: INFO: Got endpoints: latency-svc-zj2tk [838.557991ms] May 12 11:41:03.868: INFO: Created: latency-svc-f6s6l May 12 11:41:03.883: INFO: Got endpoints: latency-svc-f6s6l [804.304669ms] May 12 11:41:03.911: INFO: Created: latency-svc-42vzx May 12 11:41:03.919: INFO: Got endpoints: latency-svc-42vzx [784.504363ms] May 12 11:41:03.992: INFO: Created: latency-svc-bk6tg May 12 11:41:03.997: INFO: Got endpoints: latency-svc-bk6tg [826.481331ms] May 12 11:41:04.024: INFO: Created: latency-svc-6z9hn May 12 11:41:04.039: INFO: Got endpoints: latency-svc-6z9hn [768.914655ms] May 12 11:41:04.060: INFO: Created: latency-svc-bgnbp May 12 11:41:04.082: INFO: Got endpoints: latency-svc-bgnbp [754.288231ms] May 12 11:41:04.154: INFO: Created: latency-svc-5ww8j May 12 11:41:04.156: INFO: Got endpoints: latency-svc-5ww8j [756.931566ms] May 12 11:41:04.233: INFO: Created: latency-svc-jvlkn May 12 11:41:04.250: INFO: Got endpoints: latency-svc-jvlkn [825.080546ms] May 12 11:41:04.317: INFO: Created: latency-svc-h74jq May 12 11:41:04.322: INFO: Got endpoints: latency-svc-h74jq [867.532205ms] May 12 11:41:04.349: INFO: Created: latency-svc-xmgrh May 12 11:41:04.370: INFO: Got endpoints: latency-svc-xmgrh [885.074844ms] May 12 11:41:04.459: INFO: Created: latency-svc-pb8lg May 12 11:41:04.462: INFO: Got endpoints: latency-svc-pb8lg [916.716518ms] May 12 11:41:04.510: INFO: Created: latency-svc-2n9ps May 12 11:41:04.527: INFO: Got endpoints: latency-svc-2n9ps [945.378135ms] May 12 11:41:04.629: INFO: Created: latency-svc-jgqwq May 12 11:41:04.642: INFO: Got endpoints: latency-svc-jgqwq [1.024527761s] May 12 11:41:04.678: INFO: Created: latency-svc-82nmz May 12 11:41:04.695: INFO: Got endpoints: latency-svc-82nmz [1.01203042s] May 12 11:41:04.776: INFO: Created: latency-svc-45lps May 12 11:41:04.779: INFO: Got endpoints: latency-svc-45lps [1.003960256s] May 12 11:41:04.847: INFO: Created: latency-svc-n8sv9 May 12 11:41:04.857: INFO: Got endpoints: latency-svc-n8sv9 [1.016956903s] May 12 11:41:04.876: INFO: Created: latency-svc-tj8vx May 12 11:41:04.950: INFO: Got endpoints: latency-svc-tj8vx [1.066872772s] May 12 11:41:04.952: INFO: Created: latency-svc-8r8hj May 12 11:41:04.972: INFO: Got endpoints: latency-svc-8r8hj [1.052808643s] May 12 11:41:05.010: INFO: Created: latency-svc-7k7cq May 12 11:41:05.026: INFO: Got endpoints: latency-svc-7k7cq [1.029200559s] May 12 11:41:05.044: INFO: Created: latency-svc-vk7rt May 12 11:41:05.098: INFO: Got endpoints: latency-svc-vk7rt [1.05853373s] May 12 11:41:05.140: INFO: Created: latency-svc-fvhtg May 12 11:41:05.183: INFO: Got endpoints: latency-svc-fvhtg [1.100395815s] May 12 11:41:05.256: INFO: Created: latency-svc-xs92b May 12 11:41:05.285: INFO: Got endpoints: latency-svc-xs92b [1.128861261s] May 12 
11:41:05.309: INFO: Created: latency-svc-k2522 May 12 11:41:05.333: INFO: Got endpoints: latency-svc-k2522 [1.082858723s] May 12 11:41:05.400: INFO: Created: latency-svc-jpt4k May 12 11:41:05.403: INFO: Got endpoints: latency-svc-jpt4k [1.081184039s] May 12 11:41:05.448: INFO: Created: latency-svc-4wbss May 12 11:41:05.460: INFO: Got endpoints: latency-svc-4wbss [1.089485097s] May 12 11:41:05.550: INFO: Created: latency-svc-mjczg May 12 11:41:05.556: INFO: Got endpoints: latency-svc-mjczg [1.093670503s] May 12 11:41:05.633: INFO: Created: latency-svc-vg25z May 12 11:41:05.687: INFO: Got endpoints: latency-svc-vg25z [1.159857828s] May 12 11:41:05.754: INFO: Created: latency-svc-jwxcn May 12 11:41:05.884: INFO: Got endpoints: latency-svc-jwxcn [1.242209737s] May 12 11:41:05.887: INFO: Created: latency-svc-ddwzv May 12 11:41:05.958: INFO: Got endpoints: latency-svc-ddwzv [1.262569639s] May 12 11:41:06.097: INFO: Created: latency-svc-pwl4n May 12 11:41:06.186: INFO: Got endpoints: latency-svc-pwl4n [1.407627352s] May 12 11:41:06.309: INFO: Created: latency-svc-h4k6s May 12 11:41:06.330: INFO: Got endpoints: latency-svc-h4k6s [1.472479404s] May 12 11:41:06.396: INFO: Created: latency-svc-qw9px May 12 11:41:06.483: INFO: Got endpoints: latency-svc-qw9px [1.533248023s] May 12 11:41:06.517: INFO: Created: latency-svc-766js May 12 11:41:06.547: INFO: Got endpoints: latency-svc-766js [1.574732844s] May 12 11:41:06.639: INFO: Created: latency-svc-sc6tv May 12 11:41:06.648: INFO: Got endpoints: latency-svc-sc6tv [1.6217869s] May 12 11:41:06.708: INFO: Created: latency-svc-dw5jp May 12 11:41:06.720: INFO: Got endpoints: latency-svc-dw5jp [1.622339712s] May 12 11:41:06.814: INFO: Created: latency-svc-c7nbg May 12 11:41:06.829: INFO: Got endpoints: latency-svc-c7nbg [1.646200712s] May 12 11:41:06.851: INFO: Created: latency-svc-nzb2p May 12 11:41:06.865: INFO: Got endpoints: latency-svc-nzb2p [1.580528172s] May 12 11:41:06.887: INFO: Created: latency-svc-kqfkj May 12 11:41:07.062: INFO: Got endpoints: latency-svc-kqfkj [1.728936508s] May 12 11:41:07.136: INFO: Created: latency-svc-dc2hc May 12 11:41:07.167: INFO: Got endpoints: latency-svc-dc2hc [1.763549018s] May 12 11:41:07.328: INFO: Created: latency-svc-jwpb6 May 12 11:41:07.335: INFO: Got endpoints: latency-svc-jwpb6 [1.874904306s] May 12 11:41:07.410: INFO: Created: latency-svc-p77l7 May 12 11:41:07.501: INFO: Got endpoints: latency-svc-p77l7 [1.945257704s] May 12 11:41:07.530: INFO: Created: latency-svc-k4dr9 May 12 11:41:07.545: INFO: Got endpoints: latency-svc-k4dr9 [1.858071923s] May 12 11:41:07.651: INFO: Created: latency-svc-2wnfv May 12 11:41:07.660: INFO: Got endpoints: latency-svc-2wnfv [1.776160584s] May 12 11:41:07.698: INFO: Created: latency-svc-rpnh8 May 12 11:41:07.708: INFO: Got endpoints: latency-svc-rpnh8 [1.749848454s] May 12 11:41:07.740: INFO: Created: latency-svc-7smp8 May 12 11:41:07.744: INFO: Got endpoints: latency-svc-7smp8 [1.557697705s] May 12 11:41:07.836: INFO: Created: latency-svc-j7hj6 May 12 11:41:07.852: INFO: Got endpoints: latency-svc-j7hj6 [1.522297726s] May 12 11:41:07.884: INFO: Created: latency-svc-c2snf May 12 11:41:07.907: INFO: Got endpoints: latency-svc-c2snf [1.423739112s] May 12 11:41:07.969: INFO: Created: latency-svc-mjlqt May 12 11:41:07.980: INFO: Got endpoints: latency-svc-mjlqt [1.433074275s] May 12 11:41:08.004: INFO: Created: latency-svc-g9shg May 12 11:41:08.023: INFO: Got endpoints: latency-svc-g9shg [1.374471215s] May 12 11:41:08.046: INFO: Created: latency-svc-px54s May 12 11:41:08.065: INFO: 
Got endpoints: latency-svc-px54s [1.344903319s] May 12 11:41:08.166: INFO: Created: latency-svc-bsn7q May 12 11:41:08.178: INFO: Got endpoints: latency-svc-bsn7q [1.349490135s] May 12 11:41:08.202: INFO: Created: latency-svc-qzqtq May 12 11:41:08.227: INFO: Got endpoints: latency-svc-qzqtq [1.361602882s] May 12 11:41:08.316: INFO: Created: latency-svc-gbfkk May 12 11:41:08.342: INFO: Got endpoints: latency-svc-gbfkk [1.279890697s] May 12 11:41:08.413: INFO: Created: latency-svc-lw8sh May 12 11:41:08.501: INFO: Got endpoints: latency-svc-lw8sh [1.334491004s] May 12 11:41:08.556: INFO: Created: latency-svc-wk855 May 12 11:41:08.564: INFO: Got endpoints: latency-svc-wk855 [1.229177388s] May 12 11:41:08.599: INFO: Created: latency-svc-xzcbg May 12 11:41:08.705: INFO: Got endpoints: latency-svc-xzcbg [1.203470594s] May 12 11:41:08.731: INFO: Created: latency-svc-xz2lh May 12 11:41:08.797: INFO: Got endpoints: latency-svc-xz2lh [1.251612228s] May 12 11:41:08.904: INFO: Created: latency-svc-qvk7l May 12 11:41:08.918: INFO: Got endpoints: latency-svc-qvk7l [1.257981478s] May 12 11:41:08.965: INFO: Created: latency-svc-jdgz2 May 12 11:41:09.022: INFO: Got endpoints: latency-svc-jdgz2 [1.314332478s] May 12 11:41:09.055: INFO: Created: latency-svc-lqxqf May 12 11:41:09.090: INFO: Got endpoints: latency-svc-lqxqf [1.345386133s] May 12 11:41:09.208: INFO: Created: latency-svc-phn79 May 12 11:41:09.282: INFO: Got endpoints: latency-svc-phn79 [1.429774327s] May 12 11:41:09.358: INFO: Created: latency-svc-hc42t May 12 11:41:09.370: INFO: Got endpoints: latency-svc-hc42t [1.462763808s] May 12 11:41:09.396: INFO: Created: latency-svc-l4v6k May 12 11:41:09.406: INFO: Got endpoints: latency-svc-l4v6k [1.426346006s] May 12 11:41:09.433: INFO: Created: latency-svc-wpmcd May 12 11:41:09.442: INFO: Got endpoints: latency-svc-wpmcd [1.419647162s] May 12 11:41:09.503: INFO: Created: latency-svc-72cgj May 12 11:41:09.527: INFO: Got endpoints: latency-svc-72cgj [1.461343494s] May 12 11:41:09.554: INFO: Created: latency-svc-fpmb9 May 12 11:41:09.569: INFO: Got endpoints: latency-svc-fpmb9 [1.390543733s] May 12 11:41:09.590: INFO: Created: latency-svc-xkjdq May 12 11:41:09.627: INFO: Got endpoints: latency-svc-xkjdq [1.399905451s] May 12 11:41:09.643: INFO: Created: latency-svc-8cvpn May 12 11:41:09.673: INFO: Got endpoints: latency-svc-8cvpn [1.330781588s] May 12 11:41:09.718: INFO: Created: latency-svc-hdcq6 May 12 11:41:09.784: INFO: Got endpoints: latency-svc-hdcq6 [1.282200689s] May 12 11:41:09.794: INFO: Created: latency-svc-s9dlc May 12 11:41:09.823: INFO: Got endpoints: latency-svc-s9dlc [1.259456132s] May 12 11:41:09.858: INFO: Created: latency-svc-qlmbn May 12 11:41:09.877: INFO: Got endpoints: latency-svc-qlmbn [1.172147028s] May 12 11:41:09.938: INFO: Created: latency-svc-5l4n9 May 12 11:41:09.961: INFO: Got endpoints: latency-svc-5l4n9 [1.164139834s] May 12 11:41:09.998: INFO: Created: latency-svc-scvlq May 12 11:41:10.016: INFO: Got endpoints: latency-svc-scvlq [1.097061449s] May 12 11:41:10.088: INFO: Created: latency-svc-tx9sp May 12 11:41:10.090: INFO: Got endpoints: latency-svc-tx9sp [1.068018291s] May 12 11:41:10.122: INFO: Created: latency-svc-k2t2v May 12 11:41:10.136: INFO: Got endpoints: latency-svc-k2t2v [1.046762457s] May 12 11:41:10.164: INFO: Created: latency-svc-ckrgq May 12 11:41:10.173: INFO: Got endpoints: latency-svc-ckrgq [890.577721ms] May 12 11:41:10.233: INFO: Created: latency-svc-rzhbr May 12 11:41:10.245: INFO: Got endpoints: latency-svc-rzhbr [875.388699ms] May 12 11:41:10.273: 
INFO: Created: latency-svc-fd4m9 May 12 11:41:10.288: INFO: Got endpoints: latency-svc-fd4m9 [881.52039ms] May 12 11:41:10.315: INFO: Created: latency-svc-jb2gv May 12 11:41:10.330: INFO: Got endpoints: latency-svc-jb2gv [887.790827ms] May 12 11:41:10.381: INFO: Created: latency-svc-mv474 May 12 11:41:10.390: INFO: Got endpoints: latency-svc-mv474 [863.622965ms] May 12 11:41:10.416: INFO: Created: latency-svc-7tmwg May 12 11:41:10.445: INFO: Got endpoints: latency-svc-7tmwg [875.684891ms] May 12 11:41:10.556: INFO: Created: latency-svc-wbm2v May 12 11:41:10.558: INFO: Got endpoints: latency-svc-wbm2v [931.005685ms] May 12 11:41:10.596: INFO: Created: latency-svc-s49zt May 12 11:41:10.626: INFO: Got endpoints: latency-svc-s49zt [953.527602ms] May 12 11:41:10.699: INFO: Created: latency-svc-xlkd5 May 12 11:41:10.710: INFO: Got endpoints: latency-svc-xlkd5 [925.937606ms] May 12 11:41:10.753: INFO: Created: latency-svc-bppvv May 12 11:41:10.776: INFO: Got endpoints: latency-svc-bppvv [952.410013ms] May 12 11:41:10.855: INFO: Created: latency-svc-fz667 May 12 11:41:10.867: INFO: Got endpoints: latency-svc-fz667 [989.981872ms] May 12 11:41:10.915: INFO: Created: latency-svc-26n59 May 12 11:41:10.927: INFO: Got endpoints: latency-svc-26n59 [966.199756ms] May 12 11:41:10.951: INFO: Created: latency-svc-wpgn7 May 12 11:41:11.004: INFO: Got endpoints: latency-svc-wpgn7 [988.738704ms] May 12 11:41:11.016: INFO: Created: latency-svc-wvcg2 May 12 11:41:11.064: INFO: Got endpoints: latency-svc-wvcg2 [973.910313ms] May 12 11:41:11.166: INFO: Created: latency-svc-q7txj May 12 11:41:11.169: INFO: Got endpoints: latency-svc-q7txj [1.032111024s] May 12 11:41:11.220: INFO: Created: latency-svc-4txxt May 12 11:41:11.241: INFO: Got endpoints: latency-svc-4txxt [1.067913561s] May 12 11:41:11.298: INFO: Created: latency-svc-jnl6z May 12 11:41:11.307: INFO: Got endpoints: latency-svc-jnl6z [1.061346018s] May 12 11:41:11.329: INFO: Created: latency-svc-4fz29 May 12 11:41:11.343: INFO: Got endpoints: latency-svc-4fz29 [1.055638072s] May 12 11:41:11.365: INFO: Created: latency-svc-964b8 May 12 11:41:11.380: INFO: Got endpoints: latency-svc-964b8 [1.049365868s] May 12 11:41:11.447: INFO: Created: latency-svc-mchjx May 12 11:41:11.451: INFO: Got endpoints: latency-svc-mchjx [1.060509846s] May 12 11:41:11.508: INFO: Created: latency-svc-88qfz May 12 11:41:11.532: INFO: Got endpoints: latency-svc-88qfz [1.087532065s] May 12 11:41:11.663: INFO: Created: latency-svc-spnlc May 12 11:41:11.665: INFO: Got endpoints: latency-svc-spnlc [1.107130093s] May 12 11:41:11.731: INFO: Created: latency-svc-wkh9j May 12 11:41:11.891: INFO: Got endpoints: latency-svc-wkh9j [1.264248233s] May 12 11:41:11.892: INFO: Created: latency-svc-nlbd4 May 12 11:41:11.922: INFO: Got endpoints: latency-svc-nlbd4 [1.211991836s] May 12 11:41:11.946: INFO: Created: latency-svc-7s9cx May 12 11:41:11.963: INFO: Got endpoints: latency-svc-7s9cx [1.187423365s] May 12 11:41:12.119: INFO: Created: latency-svc-w7p5z May 12 11:41:12.122: INFO: Got endpoints: latency-svc-w7p5z [1.254637827s] May 12 11:41:12.163: INFO: Created: latency-svc-ns4vg May 12 11:41:12.192: INFO: Got endpoints: latency-svc-ns4vg [1.264893297s] May 12 11:41:12.298: INFO: Created: latency-svc-lbmhr May 12 11:41:12.385: INFO: Got endpoints: latency-svc-lbmhr [1.380875486s] May 12 11:41:12.389: INFO: Created: latency-svc-8w5k6 May 12 11:41:12.485: INFO: Got endpoints: latency-svc-8w5k6 [1.420050125s] May 12 11:41:12.547: INFO: Created: latency-svc-d7cd4 May 12 11:41:12.565: INFO: Got 
endpoints: latency-svc-d7cd4 [1.395938484s] May 12 11:41:12.731: INFO: Created: latency-svc-kz992 May 12 11:41:12.740: INFO: Got endpoints: latency-svc-kz992 [1.499285765s] May 12 11:41:12.902: INFO: Created: latency-svc-g4mzb May 12 11:41:12.926: INFO: Got endpoints: latency-svc-g4mzb [1.619089967s] May 12 11:41:12.985: INFO: Created: latency-svc-6bqg6 May 12 11:41:13.082: INFO: Got endpoints: latency-svc-6bqg6 [1.738542233s] May 12 11:41:13.135: INFO: Created: latency-svc-tlhxv May 12 11:41:13.178: INFO: Got endpoints: latency-svc-tlhxv [1.798241215s] May 12 11:41:13.334: INFO: Created: latency-svc-ln6wt May 12 11:41:13.358: INFO: Got endpoints: latency-svc-ln6wt [1.907484294s] May 12 11:41:13.424: INFO: Created: latency-svc-2s9zz May 12 11:41:13.442: INFO: Got endpoints: latency-svc-2s9zz [1.909877916s] May 12 11:41:13.508: INFO: Created: latency-svc-2mgh7 May 12 11:41:13.574: INFO: Got endpoints: latency-svc-2mgh7 [1.908570597s] May 12 11:41:13.640: INFO: Created: latency-svc-lcktk May 12 11:41:13.659: INFO: Got endpoints: latency-svc-lcktk [1.768226707s] May 12 11:41:13.724: INFO: Created: latency-svc-j4sjq May 12 11:41:13.773: INFO: Got endpoints: latency-svc-j4sjq [1.851807983s] May 12 11:41:13.891: INFO: Created: latency-svc-kdc4n May 12 11:41:13.895: INFO: Got endpoints: latency-svc-kdc4n [1.931046328s] May 12 11:41:13.953: INFO: Created: latency-svc-m2mvt May 12 11:41:13.978: INFO: Got endpoints: latency-svc-m2mvt [1.856170305s] May 12 11:41:14.043: INFO: Created: latency-svc-5hxqx May 12 11:41:14.056: INFO: Got endpoints: latency-svc-5hxqx [1.86401955s] May 12 11:41:14.083: INFO: Created: latency-svc-cr5fg May 12 11:41:14.092: INFO: Got endpoints: latency-svc-cr5fg [1.706700359s] May 12 11:41:14.148: INFO: Created: latency-svc-sj49l May 12 11:41:14.174: INFO: Got endpoints: latency-svc-sj49l [1.68968828s] May 12 11:41:14.204: INFO: Created: latency-svc-fp79b May 12 11:41:14.224: INFO: Got endpoints: latency-svc-fp79b [1.659019119s] May 12 11:41:14.280: INFO: Created: latency-svc-6pkjw May 12 11:41:14.284: INFO: Got endpoints: latency-svc-6pkjw [1.544192602s] May 12 11:41:14.317: INFO: Created: latency-svc-w4cxj May 12 11:41:14.332: INFO: Got endpoints: latency-svc-w4cxj [1.406348408s] May 12 11:41:14.354: INFO: Created: latency-svc-xmrqx May 12 11:41:14.424: INFO: Got endpoints: latency-svc-xmrqx [1.342209145s] May 12 11:41:14.438: INFO: Created: latency-svc-l9tnk May 12 11:41:14.447: INFO: Got endpoints: latency-svc-l9tnk [1.269045878s] May 12 11:41:14.481: INFO: Created: latency-svc-6s79h May 12 11:41:14.495: INFO: Got endpoints: latency-svc-6s79h [1.136959443s] May 12 11:41:14.521: INFO: Created: latency-svc-jsd7k May 12 11:41:14.579: INFO: Got endpoints: latency-svc-jsd7k [1.136611133s] May 12 11:41:14.606: INFO: Created: latency-svc-jw926 May 12 11:41:14.623: INFO: Got endpoints: latency-svc-jw926 [1.04859088s] May 12 11:41:14.655: INFO: Created: latency-svc-f5jl4 May 12 11:41:14.716: INFO: Got endpoints: latency-svc-f5jl4 [1.05733732s] May 12 11:41:14.743: INFO: Created: latency-svc-xvhdh May 12 11:41:14.779: INFO: Got endpoints: latency-svc-xvhdh [1.00564127s] May 12 11:41:14.886: INFO: Created: latency-svc-rhqlh May 12 11:41:14.930: INFO: Got endpoints: latency-svc-rhqlh [1.035736964s] May 12 11:41:15.071: INFO: Created: latency-svc-r7b9f May 12 11:41:15.091: INFO: Got endpoints: latency-svc-r7b9f [1.112925853s] May 12 11:41:15.171: INFO: Created: latency-svc-grd6l May 12 11:41:15.215: INFO: Got endpoints: latency-svc-grd6l [1.158789668s] May 12 11:41:15.242: INFO: 
Created: latency-svc-jmwpv May 12 11:41:15.260: INFO: Got endpoints: latency-svc-jmwpv [1.168383411s] May 12 11:41:15.284: INFO: Created: latency-svc-6fskj May 12 11:41:15.303: INFO: Got endpoints: latency-svc-6fskj [1.128266479s] May 12 11:41:15.406: INFO: Created: latency-svc-x6xvp May 12 11:41:15.423: INFO: Got endpoints: latency-svc-x6xvp [1.199219659s] May 12 11:41:15.458: INFO: Created: latency-svc-lzp4z May 12 11:41:15.500: INFO: Got endpoints: latency-svc-lzp4z [1.215755409s] May 12 11:41:15.555: INFO: Created: latency-svc-spzts May 12 11:41:15.584: INFO: Got endpoints: latency-svc-spzts [1.251689277s] May 12 11:41:15.584: INFO: Created: latency-svc-zwqbp May 12 11:41:15.598: INFO: Got endpoints: latency-svc-zwqbp [1.173783524s] May 12 11:41:15.620: INFO: Created: latency-svc-2spwv May 12 11:41:15.641: INFO: Got endpoints: latency-svc-2spwv [1.193917555s] May 12 11:41:15.699: INFO: Created: latency-svc-hvr6v May 12 11:41:15.719: INFO: Got endpoints: latency-svc-hvr6v [1.223540022s] May 12 11:41:15.770: INFO: Created: latency-svc-hjsh4 May 12 11:41:15.786: INFO: Got endpoints: latency-svc-hjsh4 [1.206684726s] May 12 11:41:15.786: INFO: Latencies: [61.479696ms 193.190525ms 362.34489ms 397.922954ms 504.934975ms 529.841813ms 568.909782ms 655.242624ms 754.041786ms 754.288231ms 756.931566ms 759.429775ms 768.914655ms 782.443128ms 784.504363ms 791.868225ms 795.808092ms 804.304669ms 807.206376ms 808.310994ms 820.721797ms 825.080546ms 826.347632ms 826.481331ms 838.557991ms 851.379151ms 863.622965ms 867.532205ms 875.388699ms 875.684891ms 876.178641ms 881.456902ms 881.52039ms 885.074844ms 887.790827ms 890.577721ms 912.606289ms 916.716518ms 920.977099ms 922.690419ms 925.937606ms 931.005685ms 945.378135ms 952.410013ms 953.527602ms 954.746259ms 966.199756ms 968.961274ms 973.910313ms 981.145412ms 988.738704ms 989.981872ms 1.003960256s 1.00564127s 1.007040617s 1.01203042s 1.016956903s 1.023781613s 1.024527761s 1.029200559s 1.032111024s 1.035736964s 1.037200117s 1.039548538s 1.045262111s 1.046762457s 1.04859088s 1.049005382s 1.049365868s 1.052808643s 1.05522818s 1.055638072s 1.05733732s 1.057800284s 1.05853373s 1.060509846s 1.061346018s 1.066872772s 1.067913561s 1.068018291s 1.07193939s 1.076474695s 1.081184039s 1.082858723s 1.086587635s 1.087532065s 1.089485097s 1.093670503s 1.097061449s 1.100395815s 1.107130093s 1.112925853s 1.128266479s 1.128861261s 1.136611133s 1.136633015s 1.136959443s 1.158789668s 1.159857828s 1.164139834s 1.168383411s 1.16896379s 1.169025158s 1.169366791s 1.172147028s 1.173783524s 1.187423365s 1.188494149s 1.193917555s 1.194109336s 1.195250422s 1.199219659s 1.203470594s 1.206684726s 1.211793187s 1.211991836s 1.215755409s 1.21937172s 1.220191247s 1.223540022s 1.223861787s 1.229177388s 1.242209737s 1.251612228s 1.251689277s 1.254637827s 1.255333686s 1.257981478s 1.259456132s 1.261466775s 1.262569639s 1.264248233s 1.264893297s 1.265569373s 1.268641996s 1.269045878s 1.273466787s 1.279890697s 1.282200689s 1.283980545s 1.285938481s 1.291888215s 1.300205385s 1.313099144s 1.314332478s 1.330781588s 1.334278135s 1.334491004s 1.342209145s 1.344903319s 1.345386133s 1.349490135s 1.361602882s 1.374471215s 1.380875486s 1.390543733s 1.395938484s 1.399905451s 1.406348408s 1.407627352s 1.419647162s 1.420050125s 1.423739112s 1.426346006s 1.429774327s 1.433074275s 1.461343494s 1.462763808s 1.472479404s 1.499285765s 1.522297726s 1.533248023s 1.544192602s 1.557697705s 1.574732844s 1.580528172s 1.619089967s 1.6217869s 1.622339712s 1.646200712s 1.659019119s 1.68968828s 1.706700359s 1.728936508s 
1.738542233s 1.749848454s 1.763549018s 1.768226707s 1.776160584s 1.798241215s 1.851807983s 1.856170305s 1.858071923s 1.86401955s 1.874904306s 1.907484294s 1.908570597s 1.909877916s 1.931046328s 1.945257704s] May 12 11:41:15.786: INFO: 50 %ile: 1.168383411s May 12 11:41:15.786: INFO: 90 %ile: 1.659019119s May 12 11:41:15.786: INFO: 99 %ile: 1.931046328s May 12 11:41:15.786: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:41:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9542" for this suite. May 12 11:41:47.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:41:47.910: INFO: namespace svc-latency-9542 deletion completed in 32.11831143s • [SLOW TEST:53.740 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:41:47.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 12 11:41:47.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1678' May 12 11:41:48.269: INFO: stderr: "" May 12 11:41:48.269: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 12 11:41:49.274: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:49.274: INFO: Found 0 / 1 May 12 11:41:50.670: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:50.670: INFO: Found 0 / 1 May 12 11:41:51.418: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:51.418: INFO: Found 0 / 1 May 12 11:41:52.275: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:52.275: INFO: Found 0 / 1 May 12 11:41:53.274: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:53.274: INFO: Found 0 / 1 May 12 11:41:54.273: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:54.273: INFO: Found 1 / 1 May 12 11:41:54.273: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 12 11:41:54.275: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:54.275: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
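The kubectl patch run just below applies a strategic merge patch (the default for -p) against the pod's metadata. Once it lands, the annotation can be read back directly; a hedged one-liner using this run's pod and namespace:

  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1678 get pod redis-master-gkqb7 -o jsonpath='{.metadata.annotations.x}'   # prints: y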
May 12 11:41:54.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-gkqb7 --namespace=kubectl-1678 -p {"metadata":{"annotations":{"x":"y"}}}' May 12 11:41:54.372: INFO: stderr: "" May 12 11:41:54.372: INFO: stdout: "pod/redis-master-gkqb7 patched\n" STEP: checking annotations May 12 11:41:54.424: INFO: Selector matched 1 pods for map[app:redis] May 12 11:41:54.424: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:41:54.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1678" for this suite. May 12 11:42:16.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:42:16.522: INFO: namespace kubectl-1678 deletion completed in 22.094616568s • [SLOW TEST:28.612 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:42:16.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3255 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 11:42:16.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 11:42:46.938: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3255 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:42:46.938: INFO: >>> kubeConfig: /root/.kube/config I0512 11:42:46.961970 6 log.go:172] (0xc000433970) (0xc00352c3c0) Create stream I0512 11:42:46.962005 6 log.go:172] (0xc000433970) (0xc00352c3c0) Stream added, broadcasting: 1 I0512 11:42:46.970581 6 log.go:172] (0xc000433970) Reply frame received for 1 I0512 11:42:46.970627 6 log.go:172] (0xc000433970) (0xc0013be6e0) Create stream I0512 11:42:46.970638 6 log.go:172] (0xc000433970) (0xc0013be6e0) Stream added, broadcasting: 3 I0512 11:42:46.971667 6 log.go:172] (0xc000433970) Reply frame received for 3 I0512 11:42:46.971703 6 log.go:172] (0xc000433970) (0xc0013be820) Create stream I0512 11:42:46.971718 6 log.go:172] 
(0xc000433970) (0xc0013be820) Stream added, broadcasting: 5 I0512 11:42:46.972706 6 log.go:172] (0xc000433970) Reply frame received for 5 I0512 11:42:48.060838 6 log.go:172] (0xc000433970) Data frame received for 3 I0512 11:42:48.060877 6 log.go:172] (0xc0013be6e0) (3) Data frame handling I0512 11:42:48.060889 6 log.go:172] (0xc0013be6e0) (3) Data frame sent I0512 11:42:48.060901 6 log.go:172] (0xc000433970) Data frame received for 3 I0512 11:42:48.060914 6 log.go:172] (0xc0013be6e0) (3) Data frame handling I0512 11:42:48.060937 6 log.go:172] (0xc000433970) Data frame received for 5 I0512 11:42:48.060963 6 log.go:172] (0xc0013be820) (5) Data frame handling I0512 11:42:48.063284 6 log.go:172] (0xc000433970) Data frame received for 1 I0512 11:42:48.063317 6 log.go:172] (0xc00352c3c0) (1) Data frame handling I0512 11:42:48.063348 6 log.go:172] (0xc00352c3c0) (1) Data frame sent I0512 11:42:48.063380 6 log.go:172] (0xc000433970) (0xc00352c3c0) Stream removed, broadcasting: 1 I0512 11:42:48.063525 6 log.go:172] (0xc000433970) (0xc00352c3c0) Stream removed, broadcasting: 1 I0512 11:42:48.063559 6 log.go:172] (0xc000433970) (0xc0013be6e0) Stream removed, broadcasting: 3 I0512 11:42:48.063579 6 log.go:172] (0xc000433970) (0xc0013be820) Stream removed, broadcasting: 5 May 12 11:42:48.063: INFO: Found all expected endpoints: [netserver-0] I0512 11:42:48.063828 6 log.go:172] (0xc000433970) Go away received May 12 11:42:48.067: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.226 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3255 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:42:48.067: INFO: >>> kubeConfig: /root/.kube/config I0512 11:42:48.098710 6 log.go:172] (0xc000ac76b0) (0xc00352c640) Create stream I0512 11:42:48.098739 6 log.go:172] (0xc000ac76b0) (0xc00352c640) Stream added, broadcasting: 1 I0512 11:42:48.106467 6 log.go:172] (0xc000ac76b0) Reply frame received for 1 I0512 11:42:48.106543 6 log.go:172] (0xc000ac76b0) (0xc0013beaa0) Create stream I0512 11:42:48.106559 6 log.go:172] (0xc000ac76b0) (0xc0013beaa0) Stream added, broadcasting: 3 I0512 11:42:48.108728 6 log.go:172] (0xc000ac76b0) Reply frame received for 3 I0512 11:42:48.108765 6 log.go:172] (0xc000ac76b0) (0xc0033381e0) Create stream I0512 11:42:48.108780 6 log.go:172] (0xc000ac76b0) (0xc0033381e0) Stream added, broadcasting: 5 I0512 11:42:48.110056 6 log.go:172] (0xc000ac76b0) Reply frame received for 5 I0512 11:42:49.162215 6 log.go:172] (0xc000ac76b0) Data frame received for 3 I0512 11:42:49.162248 6 log.go:172] (0xc0013beaa0) (3) Data frame handling I0512 11:42:49.162261 6 log.go:172] (0xc0013beaa0) (3) Data frame sent I0512 11:42:49.162271 6 log.go:172] (0xc000ac76b0) Data frame received for 3 I0512 11:42:49.162280 6 log.go:172] (0xc0013beaa0) (3) Data frame handling I0512 11:42:49.162302 6 log.go:172] (0xc000ac76b0) Data frame received for 5 I0512 11:42:49.162323 6 log.go:172] (0xc0033381e0) (5) Data frame handling I0512 11:42:49.163918 6 log.go:172] (0xc000ac76b0) Data frame received for 1 I0512 11:42:49.163943 6 log.go:172] (0xc00352c640) (1) Data frame handling I0512 11:42:49.163953 6 log.go:172] (0xc00352c640) (1) Data frame sent I0512 11:42:49.163963 6 log.go:172] (0xc000ac76b0) (0xc00352c640) Stream removed, broadcasting: 1 I0512 11:42:49.164039 6 log.go:172] (0xc000ac76b0) Go away received I0512 11:42:49.164127 6 log.go:172] (0xc000ac76b0) (0xc00352c640) Stream removed, broadcasting: 1 I0512 
11:42:49.164218 6 log.go:172] (0xc000ac76b0) (0xc0013beaa0) Stream removed, broadcasting: 3 I0512 11:42:49.164252 6 log.go:172] (0xc000ac76b0) (0xc0033381e0) Stream removed, broadcasting: 5 May 12 11:42:49.164: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:42:49.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3255" for this suite. May 12 11:43:13.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:43:13.357: INFO: namespace pod-network-test-3255 deletion completed in 24.164753478s • [SLOW TEST:56.833 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:43:13.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3019 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3019 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3019 May 12 11:43:14.090: INFO: Found 0 stateful pods, waiting for 1 May 12 11:43:24.095: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 12 11:43:24.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 11:43:24.366: INFO: stderr: "I0512 11:43:24.234483 2777 log.go:172] (0xc000116fd0) (0xc00068aa00) Create stream\nI0512 11:43:24.234549 2777 log.go:172] (0xc000116fd0) (0xc00068aa00) Stream added, broadcasting: 1\nI0512 11:43:24.236870 2777 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0512 11:43:24.236934 2777 log.go:172] (0xc000116fd0) (0xc000958000) Create stream\nI0512 11:43:24.236963 2777 log.go:172] (0xc000116fd0) (0xc000958000) Stream added, 
broadcasting: 3\nI0512 11:43:24.238232 2777 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0512 11:43:24.238315 2777 log.go:172] (0xc000116fd0) (0xc00068aaa0) Create stream\nI0512 11:43:24.238344 2777 log.go:172] (0xc000116fd0) (0xc00068aaa0) Stream added, broadcasting: 5\nI0512 11:43:24.239376 2777 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0512 11:43:24.331500 2777 log.go:172] (0xc000116fd0) Data frame received for 5\nI0512 11:43:24.331523 2777 log.go:172] (0xc00068aaa0) (5) Data frame handling\nI0512 11:43:24.331539 2777 log.go:172] (0xc00068aaa0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 11:43:24.358954 2777 log.go:172] (0xc000116fd0) Data frame received for 3\nI0512 11:43:24.358993 2777 log.go:172] (0xc000958000) (3) Data frame handling\nI0512 11:43:24.359025 2777 log.go:172] (0xc000958000) (3) Data frame sent\nI0512 11:43:24.359164 2777 log.go:172] (0xc000116fd0) Data frame received for 5\nI0512 11:43:24.359178 2777 log.go:172] (0xc00068aaa0) (5) Data frame handling\nI0512 11:43:24.359192 2777 log.go:172] (0xc000116fd0) Data frame received for 3\nI0512 11:43:24.359200 2777 log.go:172] (0xc000958000) (3) Data frame handling\nI0512 11:43:24.360780 2777 log.go:172] (0xc000116fd0) Data frame received for 1\nI0512 11:43:24.360801 2777 log.go:172] (0xc00068aa00) (1) Data frame handling\nI0512 11:43:24.360833 2777 log.go:172] (0xc00068aa00) (1) Data frame sent\nI0512 11:43:24.362302 2777 log.go:172] (0xc000116fd0) (0xc00068aa00) Stream removed, broadcasting: 1\nI0512 11:43:24.362363 2777 log.go:172] (0xc000116fd0) Go away received\nI0512 11:43:24.363072 2777 log.go:172] (0xc000116fd0) (0xc00068aa00) Stream removed, broadcasting: 1\nI0512 11:43:24.363090 2777 log.go:172] (0xc000116fd0) (0xc000958000) Stream removed, broadcasting: 3\nI0512 11:43:24.363098 2777 log.go:172] (0xc000116fd0) (0xc00068aaa0) Stream removed, broadcasting: 5\n" May 12 11:43:24.366: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 11:43:24.366: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 11:43:24.370: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 11:43:34.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 11:43:34.408: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:43:34.460: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:43:34.460: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:43:34.460: INFO: May 12 11:43:34.460: INFO: StatefulSet ss has not reached scale 3, at 1 May 12 11:43:35.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.957398162s May 12 11:43:36.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.849271027s May 12 11:43:38.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.476957517s May 12 11:43:39.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.150696818s May 12 11:43:40.409: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 4.146274136s May 12 11:43:41.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.007901121s May 12 11:43:42.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.003733332s May 12 11:43:43.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 999.318891ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3019 May 12 11:43:44.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 11:43:51.251: INFO: stderr: "I0512 11:43:51.162131 2798 log.go:172] (0xc000778f20) (0xc0005bb5e0) Create stream\nI0512 11:43:51.162169 2798 log.go:172] (0xc000778f20) (0xc0005bb5e0) Stream added, broadcasting: 1\nI0512 11:43:51.165457 2798 log.go:172] (0xc000778f20) Reply frame received for 1\nI0512 11:43:51.165491 2798 log.go:172] (0xc000778f20) (0xc000287c20) Create stream\nI0512 11:43:51.165501 2798 log.go:172] (0xc000778f20) (0xc000287c20) Stream added, broadcasting: 3\nI0512 11:43:51.166477 2798 log.go:172] (0xc000778f20) Reply frame received for 3\nI0512 11:43:51.166524 2798 log.go:172] (0xc000778f20) (0xc0005ba3c0) Create stream\nI0512 11:43:51.166542 2798 log.go:172] (0xc000778f20) (0xc0005ba3c0) Stream added, broadcasting: 5\nI0512 11:43:51.167339 2798 log.go:172] (0xc000778f20) Reply frame received for 5\nI0512 11:43:51.246209 2798 log.go:172] (0xc000778f20) Data frame received for 5\nI0512 11:43:51.246252 2798 log.go:172] (0xc0005ba3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0512 11:43:51.246274 2798 log.go:172] (0xc000778f20) Data frame received for 3\nI0512 11:43:51.246317 2798 log.go:172] (0xc000287c20) (3) Data frame handling\nI0512 11:43:51.246333 2798 log.go:172] (0xc000287c20) (3) Data frame sent\nI0512 11:43:51.246348 2798 log.go:172] (0xc000778f20) Data frame received for 3\nI0512 11:43:51.246360 2798 log.go:172] (0xc0005ba3c0) (5) Data frame sent\nI0512 11:43:51.246381 2798 log.go:172] (0xc000778f20) Data frame received for 5\nI0512 11:43:51.246388 2798 log.go:172] (0xc0005ba3c0) (5) Data frame handling\nI0512 11:43:51.246402 2798 log.go:172] (0xc000287c20) (3) Data frame handling\nI0512 11:43:51.247665 2798 log.go:172] (0xc000778f20) Data frame received for 1\nI0512 11:43:51.247679 2798 log.go:172] (0xc0005bb5e0) (1) Data frame handling\nI0512 11:43:51.247685 2798 log.go:172] (0xc0005bb5e0) (1) Data frame sent\nI0512 11:43:51.247691 2798 log.go:172] (0xc000778f20) (0xc0005bb5e0) Stream removed, broadcasting: 1\nI0512 11:43:51.247722 2798 log.go:172] (0xc000778f20) Go away received\nI0512 11:43:51.247930 2798 log.go:172] (0xc000778f20) (0xc0005bb5e0) Stream removed, broadcasting: 1\nI0512 11:43:51.247946 2798 log.go:172] (0xc000778f20) (0xc000287c20) Stream removed, broadcasting: 3\nI0512 11:43:51.247954 2798 log.go:172] (0xc000778f20) (0xc0005ba3c0) Stream removed, broadcasting: 5\n" May 12 11:43:51.252: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 11:43:51.252: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 11:43:51.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 11:43:51.443: INFO: stderr: 
"I0512 11:43:51.381255 2827 log.go:172] (0xc0006bac60) (0xc0002b8b40) Create stream\nI0512 11:43:51.381314 2827 log.go:172] (0xc0006bac60) (0xc0002b8b40) Stream added, broadcasting: 1\nI0512 11:43:51.384208 2827 log.go:172] (0xc0006bac60) Reply frame received for 1\nI0512 11:43:51.384245 2827 log.go:172] (0xc0006bac60) (0xc0002b8280) Create stream\nI0512 11:43:51.384256 2827 log.go:172] (0xc0006bac60) (0xc0002b8280) Stream added, broadcasting: 3\nI0512 11:43:51.385339 2827 log.go:172] (0xc0006bac60) Reply frame received for 3\nI0512 11:43:51.385376 2827 log.go:172] (0xc0006bac60) (0xc0002b8320) Create stream\nI0512 11:43:51.385394 2827 log.go:172] (0xc0006bac60) (0xc0002b8320) Stream added, broadcasting: 5\nI0512 11:43:51.386014 2827 log.go:172] (0xc0006bac60) Reply frame received for 5\nI0512 11:43:51.437946 2827 log.go:172] (0xc0006bac60) Data frame received for 5\nI0512 11:43:51.437969 2827 log.go:172] (0xc0002b8320) (5) Data frame handling\nI0512 11:43:51.437979 2827 log.go:172] (0xc0002b8320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 11:43:51.437994 2827 log.go:172] (0xc0006bac60) Data frame received for 3\nI0512 11:43:51.437999 2827 log.go:172] (0xc0002b8280) (3) Data frame handling\nI0512 11:43:51.438006 2827 log.go:172] (0xc0002b8280) (3) Data frame sent\nI0512 11:43:51.438012 2827 log.go:172] (0xc0006bac60) Data frame received for 3\nI0512 11:43:51.438018 2827 log.go:172] (0xc0002b8280) (3) Data frame handling\nI0512 11:43:51.438072 2827 log.go:172] (0xc0006bac60) Data frame received for 5\nI0512 11:43:51.438090 2827 log.go:172] (0xc0002b8320) (5) Data frame handling\nI0512 11:43:51.439380 2827 log.go:172] (0xc0006bac60) Data frame received for 1\nI0512 11:43:51.439394 2827 log.go:172] (0xc0002b8b40) (1) Data frame handling\nI0512 11:43:51.439405 2827 log.go:172] (0xc0002b8b40) (1) Data frame sent\nI0512 11:43:51.439416 2827 log.go:172] (0xc0006bac60) (0xc0002b8b40) Stream removed, broadcasting: 1\nI0512 11:43:51.439483 2827 log.go:172] (0xc0006bac60) Go away received\nI0512 11:43:51.439731 2827 log.go:172] (0xc0006bac60) (0xc0002b8b40) Stream removed, broadcasting: 1\nI0512 11:43:51.439744 2827 log.go:172] (0xc0006bac60) (0xc0002b8280) Stream removed, broadcasting: 3\nI0512 11:43:51.439750 2827 log.go:172] (0xc0006bac60) (0xc0002b8320) Stream removed, broadcasting: 5\n" May 12 11:43:51.443: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 11:43:51.443: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 11:43:51.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 11:43:51.634: INFO: stderr: "I0512 11:43:51.562028 2846 log.go:172] (0xc0006d4580) (0xc000606b40) Create stream\nI0512 11:43:51.562113 2846 log.go:172] (0xc0006d4580) (0xc000606b40) Stream added, broadcasting: 1\nI0512 11:43:51.564355 2846 log.go:172] (0xc0006d4580) Reply frame received for 1\nI0512 11:43:51.564381 2846 log.go:172] (0xc0006d4580) (0xc00098c000) Create stream\nI0512 11:43:51.564392 2846 log.go:172] (0xc0006d4580) (0xc00098c000) Stream added, broadcasting: 3\nI0512 11:43:51.565361 2846 log.go:172] (0xc0006d4580) Reply frame received for 3\nI0512 11:43:51.565406 2846 log.go:172] (0xc0006d4580) (0xc00098c0a0) Create stream\nI0512 11:43:51.565424 2846 
log.go:172] (0xc0006d4580) (0xc00098c0a0) Stream added, broadcasting: 5\nI0512 11:43:51.566020 2846 log.go:172] (0xc0006d4580) Reply frame received for 5\nI0512 11:43:51.629809 2846 log.go:172] (0xc0006d4580) Data frame received for 3\nI0512 11:43:51.629847 2846 log.go:172] (0xc00098c000) (3) Data frame handling\nI0512 11:43:51.629858 2846 log.go:172] (0xc00098c000) (3) Data frame sent\nI0512 11:43:51.629867 2846 log.go:172] (0xc0006d4580) Data frame received for 3\nI0512 11:43:51.629874 2846 log.go:172] (0xc00098c000) (3) Data frame handling\nI0512 11:43:51.629937 2846 log.go:172] (0xc0006d4580) Data frame received for 5\nI0512 11:43:51.630000 2846 log.go:172] (0xc00098c0a0) (5) Data frame handling\nI0512 11:43:51.630030 2846 log.go:172] (0xc00098c0a0) (5) Data frame sent\nI0512 11:43:51.630048 2846 log.go:172] (0xc0006d4580) Data frame received for 5\nI0512 11:43:51.630058 2846 log.go:172] (0xc00098c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 11:43:51.631074 2846 log.go:172] (0xc0006d4580) Data frame received for 1\nI0512 11:43:51.631100 2846 log.go:172] (0xc000606b40) (1) Data frame handling\nI0512 11:43:51.631132 2846 log.go:172] (0xc000606b40) (1) Data frame sent\nI0512 11:43:51.631310 2846 log.go:172] (0xc0006d4580) (0xc000606b40) Stream removed, broadcasting: 1\nI0512 11:43:51.631357 2846 log.go:172] (0xc0006d4580) Go away received\nI0512 11:43:51.631606 2846 log.go:172] (0xc0006d4580) (0xc000606b40) Stream removed, broadcasting: 1\nI0512 11:43:51.631617 2846 log.go:172] (0xc0006d4580) (0xc00098c000) Stream removed, broadcasting: 3\nI0512 11:43:51.631622 2846 log.go:172] (0xc0006d4580) (0xc00098c0a0) Stream removed, broadcasting: 5\n" May 12 11:43:51.634: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 11:43:51.634: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 11:43:51.660: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 11:43:51.660: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 11:43:51.660: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 11:43:51.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 11:43:51.862: INFO: stderr: "I0512 11:43:51.801273 2865 log.go:172] (0xc0007ae8f0) (0xc0007e05a0) Create stream\nI0512 11:43:51.801331 2865 log.go:172] (0xc0007ae8f0) (0xc0007e05a0) Stream added, broadcasting: 1\nI0512 11:43:51.802940 2865 log.go:172] (0xc0007ae8f0) Reply frame received for 1\nI0512 11:43:51.802965 2865 log.go:172] (0xc0007ae8f0) (0xc000662000) Create stream\nI0512 11:43:51.802975 2865 log.go:172] (0xc0007ae8f0) (0xc000662000) Stream added, broadcasting: 3\nI0512 11:43:51.803562 2865 log.go:172] (0xc0007ae8f0) Reply frame received for 3\nI0512 11:43:51.803587 2865 log.go:172] (0xc0007ae8f0) (0xc000662640) Create stream\nI0512 11:43:51.803599 2865 log.go:172] (0xc0007ae8f0) (0xc000662640) Stream added, broadcasting: 5\nI0512 11:43:51.804194 2865 log.go:172] (0xc0007ae8f0) Reply frame received for 5\nI0512 11:43:51.857821 2865 log.go:172] (0xc0007ae8f0) Data frame received for 5\nI0512 
11:43:51.857857 2865 log.go:172] (0xc000662640) (5) Data frame handling\nI0512 11:43:51.857884 2865 log.go:172] (0xc000662640) (5) Data frame sent\nI0512 11:43:51.857901 2865 log.go:172] (0xc0007ae8f0) Data frame received for 5\nI0512 11:43:51.857932 2865 log.go:172] (0xc000662640) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 11:43:51.857959 2865 log.go:172] (0xc0007ae8f0) Data frame received for 3\nI0512 11:43:51.857980 2865 log.go:172] (0xc000662000) (3) Data frame handling\nI0512 11:43:51.858000 2865 log.go:172] (0xc000662000) (3) Data frame sent\nI0512 11:43:51.858011 2865 log.go:172] (0xc0007ae8f0) Data frame received for 3\nI0512 11:43:51.858019 2865 log.go:172] (0xc000662000) (3) Data frame handling\nI0512 11:43:51.859448 2865 log.go:172] (0xc0007ae8f0) Data frame received for 1\nI0512 11:43:51.859472 2865 log.go:172] (0xc0007e05a0) (1) Data frame handling\nI0512 11:43:51.859489 2865 log.go:172] (0xc0007e05a0) (1) Data frame sent\nI0512 11:43:51.859506 2865 log.go:172] (0xc0007ae8f0) (0xc0007e05a0) Stream removed, broadcasting: 1\nI0512 11:43:51.859525 2865 log.go:172] (0xc0007ae8f0) Go away received\nI0512 11:43:51.859912 2865 log.go:172] (0xc0007ae8f0) (0xc0007e05a0) Stream removed, broadcasting: 1\nI0512 11:43:51.859932 2865 log.go:172] (0xc0007ae8f0) (0xc000662000) Stream removed, broadcasting: 3\nI0512 11:43:51.859943 2865 log.go:172] (0xc0007ae8f0) (0xc000662640) Stream removed, broadcasting: 5\n" May 12 11:43:51.863: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 11:43:51.863: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 11:43:51.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 11:43:52.105: INFO: stderr: "I0512 11:43:51.988116 2882 log.go:172] (0xc000400420) (0xc000398820) Create stream\nI0512 11:43:51.988179 2882 log.go:172] (0xc000400420) (0xc000398820) Stream added, broadcasting: 1\nI0512 11:43:51.992050 2882 log.go:172] (0xc000400420) Reply frame received for 1\nI0512 11:43:51.992118 2882 log.go:172] (0xc000400420) (0xc000398000) Create stream\nI0512 11:43:51.992145 2882 log.go:172] (0xc000400420) (0xc000398000) Stream added, broadcasting: 3\nI0512 11:43:51.993586 2882 log.go:172] (0xc000400420) Reply frame received for 3\nI0512 11:43:51.993626 2882 log.go:172] (0xc000400420) (0xc000398140) Create stream\nI0512 11:43:51.993638 2882 log.go:172] (0xc000400420) (0xc000398140) Stream added, broadcasting: 5\nI0512 11:43:51.994655 2882 log.go:172] (0xc000400420) Reply frame received for 5\nI0512 11:43:52.046569 2882 log.go:172] (0xc000400420) Data frame received for 5\nI0512 11:43:52.046592 2882 log.go:172] (0xc000398140) (5) Data frame handling\nI0512 11:43:52.046604 2882 log.go:172] (0xc000398140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 11:43:52.096176 2882 log.go:172] (0xc000400420) Data frame received for 3\nI0512 11:43:52.096199 2882 log.go:172] (0xc000398000) (3) Data frame handling\nI0512 11:43:52.096212 2882 log.go:172] (0xc000398000) (3) Data frame sent\nI0512 11:43:52.096370 2882 log.go:172] (0xc000400420) Data frame received for 5\nI0512 11:43:52.096381 2882 log.go:172] (0xc000398140) (5) Data frame handling\nI0512 11:43:52.096482 2882 log.go:172] (0xc000400420) Data frame received for 3\nI0512 11:43:52.096495 2882 log.go:172] 
(0xc000398000) (3) Data frame handling\nI0512 11:43:52.098896 2882 log.go:172] (0xc000400420) Data frame received for 1\nI0512 11:43:52.098907 2882 log.go:172] (0xc000398820) (1) Data frame handling\nI0512 11:43:52.098915 2882 log.go:172] (0xc000398820) (1) Data frame sent\nI0512 11:43:52.099094 2882 log.go:172] (0xc000400420) (0xc000398820) Stream removed, broadcasting: 1\nI0512 11:43:52.099180 2882 log.go:172] (0xc000400420) Go away received\nI0512 11:43:52.099436 2882 log.go:172] (0xc000400420) (0xc000398820) Stream removed, broadcasting: 1\nI0512 11:43:52.099458 2882 log.go:172] (0xc000400420) (0xc000398000) Stream removed, broadcasting: 3\nI0512 11:43:52.099472 2882 log.go:172] (0xc000400420) (0xc000398140) Stream removed, broadcasting: 5\n" May 12 11:43:52.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 11:43:52.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 11:43:52.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3019 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 11:43:52.337: INFO: stderr: "I0512 11:43:52.243812 2902 log.go:172] (0xc000708a50) (0xc00098e780) Create stream\nI0512 11:43:52.243860 2902 log.go:172] (0xc000708a50) (0xc00098e780) Stream added, broadcasting: 1\nI0512 11:43:52.245476 2902 log.go:172] (0xc000708a50) Reply frame received for 1\nI0512 11:43:52.245502 2902 log.go:172] (0xc000708a50) (0xc000942000) Create stream\nI0512 11:43:52.245517 2902 log.go:172] (0xc000708a50) (0xc000942000) Stream added, broadcasting: 3\nI0512 11:43:52.246107 2902 log.go:172] (0xc000708a50) Reply frame received for 3\nI0512 11:43:52.246140 2902 log.go:172] (0xc000708a50) (0xc00098e820) Create stream\nI0512 11:43:52.246152 2902 log.go:172] (0xc000708a50) (0xc00098e820) Stream added, broadcasting: 5\nI0512 11:43:52.246804 2902 log.go:172] (0xc000708a50) Reply frame received for 5\nI0512 11:43:52.302166 2902 log.go:172] (0xc000708a50) Data frame received for 5\nI0512 11:43:52.302190 2902 log.go:172] (0xc00098e820) (5) Data frame handling\nI0512 11:43:52.302204 2902 log.go:172] (0xc00098e820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0512 11:43:52.330226 2902 log.go:172] (0xc000708a50) Data frame received for 3\nI0512 11:43:52.330245 2902 log.go:172] (0xc000942000) (3) Data frame handling\nI0512 11:43:52.330255 2902 log.go:172] (0xc000942000) (3) Data frame sent\nI0512 11:43:52.330262 2902 log.go:172] (0xc000708a50) Data frame received for 3\nI0512 11:43:52.330267 2902 log.go:172] (0xc000942000) (3) Data frame handling\nI0512 11:43:52.330698 2902 log.go:172] (0xc000708a50) Data frame received for 5\nI0512 11:43:52.330719 2902 log.go:172] (0xc00098e820) (5) Data frame handling\nI0512 11:43:52.332287 2902 log.go:172] (0xc000708a50) Data frame received for 1\nI0512 11:43:52.332303 2902 log.go:172] (0xc00098e780) (1) Data frame handling\nI0512 11:43:52.332317 2902 log.go:172] (0xc00098e780) (1) Data frame sent\nI0512 11:43:52.332327 2902 log.go:172] (0xc000708a50) (0xc00098e780) Stream removed, broadcasting: 1\nI0512 11:43:52.332548 2902 log.go:172] (0xc000708a50) Go away received\nI0512 11:43:52.332611 2902 log.go:172] (0xc000708a50) (0xc00098e780) Stream removed, broadcasting: 1\nI0512 11:43:52.332626 2902 log.go:172] (0xc000708a50) (0xc000942000) Stream removed, broadcasting: 3\nI0512 11:43:52.332641 2902 log.go:172] (0xc000708a50) 
(0xc00098e820) Stream removed, broadcasting: 5\n" May 12 11:43:52.337: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 11:43:52.337: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 11:43:52.337: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:43:52.340: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 12 11:44:02.396: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 11:44:02.396: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 11:44:02.396: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 11:44:02.434: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:02.435: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:02.435: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:02.435: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:02.435: INFO: May 12 11:44:02.435: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:03.673: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:03.673: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:03.673: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:03.673: INFO: ss-2 iruya-worker Running 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:03.673: INFO: May 12 11:44:03.673: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:04.810: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:04.810: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:04.810: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:04.810: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:04.810: INFO: May 12 11:44:04.810: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:05.899: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:05.899: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:05.899: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:05.899: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:05.899: INFO: May 12 11:44:05.899: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:06.904: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:06.904: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:06.904: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:06.904: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:06.904: INFO: May 12 11:44:06.904: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:07.908: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:07.908: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:07.908: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:07.908: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:07.908: INFO: May 12 11:44:07.908: INFO: StatefulSet ss has not 
reached scale 0, at 3 May 12 11:44:08.912: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:08.912: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:08.912: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:08.912: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:08.912: INFO: May 12 11:44:08.912: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:09.922: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:09.923: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:09.923: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:09.923: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:09.923: INFO: May 12 11:44:09.923: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:10.926: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:10.926: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:10.927: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:10.927: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:10.927: INFO: May 12 11:44:10.927: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 11:44:11.930: INFO: POD NODE PHASE GRACE CONDITIONS May 12 11:44:11.930: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:14 +0000 UTC }] May 12 11:44:11.930: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:43:34 +0000 UTC }] May 12 11:44:11.930: INFO: May 12 11:44:11.930: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3019 May 12 11:44:12.934: INFO: Scaling statefulset ss to 0 May 12 11:44:12.942: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 12 11:44:12.944: INFO: Deleting all statefulset in ns statefulset-3019 May 12 11:44:12.946: INFO: Scaling statefulset ss to 0 May 12 11:44:12.952: INFO: Waiting for statefulset status.replicas updated to 0 May 12 11:44:12.954: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:44:12.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3019" for this suite.
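Note: the burst-scaling behavior above depends on the StatefulSet not waiting for readiness between pod operations; in StatefulSet terms that corresponds to podManagementPolicy: Parallel (assumed here) rather than the default OrderedReady. A minimal sketch of the same up/down cycle with plain kubectl, reusing the names from this run:

# Scale up to 3 replicas; with Parallel management the controller does not
# wait for ss-0 to become Ready before creating ss-1 and ss-2
kubectl scale statefulset ss --namespace=statefulset-3019 --replicas=3
# Break readiness on a pod the way the test does: move the file the nginx
# readiness probe serves out of the web root
kubectl exec ss-0 --namespace=statefulset-3019 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Scale down to 0; the unready pods do not block termination
kubectl scale statefulset ss --namespace=statefulset-3019 --replicas=0
kubectl get pods --namespace=statefulset-3019 -w   # watch the ss-* pods disappear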
May 12 11:44:21.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:44:21.090: INFO: namespace statefulset-3019 deletion completed in 8.110147022s • [SLOW TEST:67.733 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:44:21.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 12 11:44:21.294: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:21.299: INFO: Number of nodes with available pods: 0 May 12 11:44:21.299: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:22.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:22.519: INFO: Number of nodes with available pods: 0 May 12 11:44:22.519: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:23.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:23.307: INFO: Number of nodes with available pods: 0 May 12 11:44:23.307: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:24.499: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:24.502: INFO: Number of nodes with available pods: 0 May 12 11:44:24.502: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:25.464: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:25.467: INFO: Number of nodes with available pods: 0 May 12 11:44:25.467: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:26.631: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:26.698: INFO: Number of nodes with available pods: 0 May 12 11:44:26.698: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:27.362: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:27.364: INFO: Number of nodes with available pods: 0 May 12 11:44:27.364: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:28.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:28.648: INFO: Number of nodes with available pods: 0 May 12 11:44:28.648: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:29.302: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:29.304: INFO: Number of nodes with available pods: 0 May 12 11:44:29.304: INFO: Node iruya-worker is running more than one daemon pod May 12 11:44:30.398: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:30.402: INFO: Number of nodes with available pods: 2 May 12 11:44:30.402: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 11:44:31.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:31.363: INFO: Number of nodes with available pods: 1 May 12 11:44:31.363: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:32.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:32.370: INFO: Number of nodes with available pods: 1 May 12 11:44:32.370: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:33.522: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:33.525: INFO: Number of nodes with available pods: 1 May 12 11:44:33.525: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:34.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:34.384: INFO: Number of nodes with available pods: 1 May 12 11:44:34.384: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:35.812: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:35.954: INFO: Number of nodes with available pods: 1 May 12 11:44:35.954: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:36.369: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 
11:44:36.372: INFO: Number of nodes with available pods: 1 May 12 11:44:36.372: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:37.434: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:37.600: INFO: Number of nodes with available pods: 1 May 12 11:44:37.600: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:38.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:38.371: INFO: Number of nodes with available pods: 1 May 12 11:44:38.371: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:39.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:39.460: INFO: Number of nodes with available pods: 1 May 12 11:44:39.461: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:40.487: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:40.489: INFO: Number of nodes with available pods: 1 May 12 11:44:40.489: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:41.511: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:41.515: INFO: Number of nodes with available pods: 1 May 12 11:44:41.515: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:42.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:42.371: INFO: Number of nodes with available pods: 1 May 12 11:44:42.371: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:43.620: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:43.624: INFO: Number of nodes with available pods: 1 May 12 11:44:43.624: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:44.476: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:44.479: INFO: Number of nodes with available pods: 1 May 12 11:44:44.479: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:45.835: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:45.839: INFO: Number of nodes with available pods: 1 May 12 11:44:45.839: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:46.448: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:46.452: INFO: Number of nodes with available pods: 1 May 12 11:44:46.452: INFO: Node iruya-worker2 is running more than one 
daemon pod May 12 11:44:47.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:47.567: INFO: Number of nodes with available pods: 1 May 12 11:44:47.567: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:48.609: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:48.680: INFO: Number of nodes with available pods: 1 May 12 11:44:48.680: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:49.777: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:49.780: INFO: Number of nodes with available pods: 1 May 12 11:44:49.780: INFO: Node iruya-worker2 is running more than one daemon pod May 12 11:44:50.375: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 11:44:50.377: INFO: Number of nodes with available pods: 2 May 12 11:44:50.377: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7004, will wait for the garbage collector to delete the pods May 12 11:44:51.351: INFO: Deleting DaemonSet.extensions daemon-set took: 593.556376ms May 12 11:44:51.752: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.197531ms May 12 11:45:02.255: INFO: Number of nodes with available pods: 0 May 12 11:45:02.255: INFO: Number of running nodes: 0, number of available pods: 0 May 12 11:45:02.258: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7004/daemonsets","resourceVersion":"10471654"},"items":null} May 12 11:45:02.260: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7004/pods","resourceVersion":"10471654"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:45:02.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7004" for this suite. 
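What the revival check above does is delete one daemon pod and wait for the DaemonSet controller to schedule a replacement; the repeated taint messages just record that the control-plane node is skipped because the pod template carries no toleration for node-role.kubernetes.io/master:NoSchedule. A rough by-hand equivalent, reusing the namespace and DaemonSet name from this run (the pod name is hypothetical):
$ kubectl -n daemonsets-7004 get pods -o wide                      # list the daemon pods and the nodes they run on
$ kubectl -n daemonsets-7004 delete pod daemon-set-xxxxx           # delete one daemon pod (hypothetical name)
$ kubectl -n daemonsets-7004 rollout status daemonset/daemon-set   # block until the controller revives it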
May 12 11:45:10.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:45:10.417: INFO: namespace daemonsets-7004 deletion completed in 8.144231184s • [SLOW TEST:49.327 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:45:10.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2565 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 11:45:10.594: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 11:45:44.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.230:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2565 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:45:44.785: INFO: >>> kubeConfig: /root/.kube/config I0512 11:45:44.811054 6 log.go:172] (0xc001600370) (0xc002fc6820) Create stream I0512 11:45:44.811079 6 log.go:172] (0xc001600370) (0xc002fc6820) Stream added, broadcasting: 1 I0512 11:45:44.812407 6 log.go:172] (0xc001600370) Reply frame received for 1 I0512 11:45:44.812437 6 log.go:172] (0xc001600370) (0xc00352cbe0) Create stream I0512 11:45:44.812446 6 log.go:172] (0xc001600370) (0xc00352cbe0) Stream added, broadcasting: 3 I0512 11:45:44.813485 6 log.go:172] (0xc001600370) Reply frame received for 3 I0512 11:45:44.813534 6 log.go:172] (0xc001600370) (0xc00352cc80) Create stream I0512 11:45:44.813546 6 log.go:172] (0xc001600370) (0xc00352cc80) Stream added, broadcasting: 5 I0512 11:45:44.814391 6 log.go:172] (0xc001600370) Reply frame received for 5 I0512 11:45:44.952631 6 log.go:172] (0xc001600370) Data frame received for 3 I0512 11:45:44.952680 6 log.go:172] (0xc00352cbe0) (3) Data frame handling I0512 11:45:44.952707 6 log.go:172] (0xc00352cbe0) (3) Data frame sent I0512 11:45:44.952745 6 log.go:172] (0xc001600370) Data frame received for 3 I0512 11:45:44.952758 6 log.go:172] (0xc00352cbe0) (3) Data frame handling I0512 11:45:44.952932 6 log.go:172] (0xc001600370) Data frame received for 5 I0512 11:45:44.952948 6 log.go:172] (0xc00352cc80) (5) Data frame handling I0512 11:45:44.954461 6 log.go:172] (0xc001600370) Data frame received for 1 I0512 11:45:44.954485 6 log.go:172] (0xc002fc6820) (1) Data frame handling I0512 11:45:44.954502 6 log.go:172] (0xc002fc6820) (1) Data frame sent I0512 
11:45:44.954513 6 log.go:172] (0xc001600370) (0xc002fc6820) Stream removed, broadcasting: 1 I0512 11:45:44.954528 6 log.go:172] (0xc001600370) Go away received I0512 11:45:44.954722 6 log.go:172] (0xc001600370) (0xc002fc6820) Stream removed, broadcasting: 1 I0512 11:45:44.954758 6 log.go:172] (0xc001600370) (0xc00352cbe0) Stream removed, broadcasting: 3 I0512 11:45:44.954776 6 log.go:172] (0xc001600370) (0xc00352cc80) Stream removed, broadcasting: 5 May 12 11:45:44.954: INFO: Found all expected endpoints: [netserver-0] May 12 11:45:44.996: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2565 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:45:44.996: INFO: >>> kubeConfig: /root/.kube/config I0512 11:45:45.026262 6 log.go:172] (0xc001d2e370) (0xc00352d040) Create stream I0512 11:45:45.026314 6 log.go:172] (0xc001d2e370) (0xc00352d040) Stream added, broadcasting: 1 I0512 11:45:45.027603 6 log.go:172] (0xc001d2e370) Reply frame received for 1 I0512 11:45:45.027639 6 log.go:172] (0xc001d2e370) (0xc0013bfd60) Create stream I0512 11:45:45.027663 6 log.go:172] (0xc001d2e370) (0xc0013bfd60) Stream added, broadcasting: 3 I0512 11:45:45.028353 6 log.go:172] (0xc001d2e370) Reply frame received for 3 I0512 11:45:45.028383 6 log.go:172] (0xc001d2e370) (0xc002fc68c0) Create stream I0512 11:45:45.028394 6 log.go:172] (0xc001d2e370) (0xc002fc68c0) Stream added, broadcasting: 5 I0512 11:45:45.029094 6 log.go:172] (0xc001d2e370) Reply frame received for 5 I0512 11:45:45.104701 6 log.go:172] (0xc001d2e370) Data frame received for 3 I0512 11:45:45.104740 6 log.go:172] (0xc0013bfd60) (3) Data frame handling I0512 11:45:45.104766 6 log.go:172] (0xc0013bfd60) (3) Data frame sent I0512 11:45:45.105033 6 log.go:172] (0xc001d2e370) Data frame received for 3 I0512 11:45:45.105054 6 log.go:172] (0xc0013bfd60) (3) Data frame handling I0512 11:45:45.105077 6 log.go:172] (0xc001d2e370) Data frame received for 5 I0512 11:45:45.105106 6 log.go:172] (0xc002fc68c0) (5) Data frame handling I0512 11:45:45.106383 6 log.go:172] (0xc001d2e370) Data frame received for 1 I0512 11:45:45.106409 6 log.go:172] (0xc00352d040) (1) Data frame handling I0512 11:45:45.106441 6 log.go:172] (0xc00352d040) (1) Data frame sent I0512 11:45:45.106544 6 log.go:172] (0xc001d2e370) (0xc00352d040) Stream removed, broadcasting: 1 I0512 11:45:45.106621 6 log.go:172] (0xc001d2e370) (0xc00352d040) Stream removed, broadcasting: 1 I0512 11:45:45.106634 6 log.go:172] (0xc001d2e370) (0xc0013bfd60) Stream removed, broadcasting: 3 I0512 11:45:45.106682 6 log.go:172] (0xc001d2e370) Go away received I0512 11:45:45.106792 6 log.go:172] (0xc001d2e370) (0xc002fc68c0) Stream removed, broadcasting: 5 May 12 11:45:45.106: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:45:45.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2565" for this suite. 
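Each ExecWithOptions entry above is one connectivity probe: the suite shells into the hostexec helper pod and curls a netserver pod's /hostName endpoint. Roughly the same probe can be run by hand, reusing the command, namespace, and pod IP from this run (a sketch; the IP is only valid while those pods exist):
$ kubectl -n pod-network-test-2565 exec host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.230:8080/hostName"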
May 12 11:46:09.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:46:09.343: INFO: namespace pod-network-test-2565 deletion completed in 24.226662548s • [SLOW TEST:58.926 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:46:09.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 11:46:09.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3115' May 12 11:46:09.997: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 11:46:09.997: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 12 11:46:12.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3115' May 12 11:46:12.697: INFO: stderr: "" May 12 11:46:12.697: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:46:12.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3115" for this suite. 
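The stderr captured above is the point of the test: in this release, kubectl run with --generator=deployment/apps.v1 still works but is deprecated. The non-deprecated way to create the same Deployment would be (a sketch reusing the image and namespace from this run):
$ kubectl create deployment e2e-test-nginx-deployment \
    --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3115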
May 12 11:46:19.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:46:19.597: INFO: namespace kubectl-3115 deletion completed in 6.855050178s • [SLOW TEST:10.254 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:46:19.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:46:27.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7355" for this suite. May 12 11:47:15.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:15.918: INFO: namespace kubelet-test-7355 deletion completed in 48.095630469s • [SLOW TEST:56.320 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:47:15.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:47:16.062: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 6.627924ms)
May 12 11:47:16.066: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.71924ms)
May 12 11:47:16.070: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.98217ms)
May 12 11:47:16.077: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 7.285152ms)
May 12 11:47:16.080: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.738771ms)
May 12 11:47:16.083: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.010713ms)
May 12 11:47:16.086: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.460866ms)
May 12 11:47:16.088: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.763594ms)
May 12 11:47:16.091: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.473035ms)
May 12 11:47:16.093: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.443613ms)
May 12 11:47:16.096: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.637904ms)
May 12 11:47:16.099: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.673535ms)
May 12 11:47:16.101: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.561798ms)
May 12 11:47:16.104: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.624213ms)
May 12 11:47:16.107: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.843306ms)
May 12 11:47:16.110: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.01549ms)
May 12 11:47:16.113: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.574167ms)
May 12 11:47:16.116: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.930539ms)
May 12 11:47:16.118: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.948527ms)
May 12 11:47:16.122: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.027219ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:47:16.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3404" for this suite. May 12 11:47:22.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:22.263: INFO: namespace proxy-3404 deletion completed in 6.138742979s • [SLOW TEST:6.345 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:47:22.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:47:29.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3874" for this suite. May 12 11:47:35.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:35.651: INFO: namespace namespaces-3874 deletion completed in 6.527859317s STEP: Destroying namespace "nsdeletetest-7588" for this suite. May 12 11:47:35.652: INFO: Namespace nsdeletetest-7588 was already deleted STEP: Destroying namespace "nsdeletetest-1978" for this suite. 
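The namespace test above exercises cascading cleanup: create a service in a throwaway namespace, delete the namespace, recreate it, and confirm the service did not survive. A by-hand approximation (namespace and service names are hypothetical):
$ kubectl create namespace nsdelete-demo
$ kubectl -n nsdelete-demo create service clusterip demo-svc --tcp=80:80
$ kubectl delete namespace nsdelete-demo --wait=true
$ kubectl create namespace nsdelete-demo
$ kubectl -n nsdelete-demo get services    # expect nothing: the service went away with the old namespace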
May 12 11:47:41.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:47:41.923: INFO: namespace nsdeletetest-1978 deletion completed in 6.27087632s • [SLOW TEST:19.660 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:47:41.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:47:42.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1597" for this suite. 
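The QOS check above only needs the pod to be accepted: the apiserver derives status.qosClass from the container resource requests and limits at admission time. The field is easy to inspect directly (pod name hypothetical):
$ kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
# prints Guaranteed, Burstable, or BestEffort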
May 12 11:48:05.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:48:05.100: INFO: namespace pods-1597 deletion completed in 22.82030032s • [SLOW TEST:23.177 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:48:05.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 12 11:48:05.310: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:48:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4093" for this suite. 
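Passing -p 0 makes kubectl proxy bind an ephemeral port and print it, which is what the test parses before curling /api/. By hand (the port is whatever the proxy reports on startup):
$ kubectl proxy -p 0 --disable-filter=true &
# Starting to serve on 127.0.0.1:<ephemeral-port>
$ curl http://127.0.0.1:<ephemeral-port>/api/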
May 12 11:48:13.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:48:13.499: INFO: namespace kubectl-4093 deletion completed in 8.097625853s • [SLOW TEST:8.399 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:48:13.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 12 11:48:21.876: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:48:23.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3678" for this suite. 
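Both halves of the ReplicaSet test above come down to the label selector: a bare pod whose labels match is adopted (it gains an ownerReference), and relabeling it so it no longer matches releases it. The release step, by hand (the new label value is arbitrary):
$ kubectl label pod pod-adoption-release name=not-matching --overwrite
$ kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # expect no ReplicaSet owner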
May 12 11:48:51.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:48:51.223: INFO: namespace replicaset-3678 deletion completed in 28.097131774s • [SLOW TEST:37.724 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:48:51.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0512 11:48:52.773830 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 11:48:52.773: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:48:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9519" for this suite. 
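The garbage-collector test above leans on ownerReferences: deleting the Deployment without orphaning lets the GC delete the dependent ReplicaSet and pods, which is why the suite briefly sees "1 rs" and "2 pods" before they drain. Equivalent by hand (deployment name hypothetical; cascading deletion is kubectl's default):
$ kubectl delete deployment gc-demo
$ kubectl get replicasets,pods    # dependents disappear once the GC catches up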
May 12 11:49:00.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:49:01.012: INFO: namespace gc-9519 deletion completed in 8.235945365s • [SLOW TEST:9.788 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:49:01.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 12 11:49:02.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554" in namespace "downward-api-4628" to be "success or failure" May 12 11:49:02.207: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Pending", Reason="", readiness=false. Elapsed: 3.474813ms May 12 11:49:04.257: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053448391s May 12 11:49:06.287: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083463319s May 12 11:49:08.290: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086827756s May 12 11:49:10.714: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51066696s May 12 11:49:12.718: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.514746827s STEP: Saw pod success May 12 11:49:12.718: INFO: Pod "downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554" satisfied condition "success or failure" May 12 11:49:12.721: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554 container client-container: STEP: delete the pod May 12 11:49:13.020: INFO: Waiting for pod downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554 to disappear May 12 11:49:13.029: INFO: Pod downwardapi-volume-f702c829-36ab-4fe4-a346-b266d6fb2554 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:49:13.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4628" for this suite. 
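The downward API volume test above mounts pod metadata as files and asserts the per-item file mode it requested. A self-contained sketch of the same shape (names and image are illustrative, not the suite's; 256 is the decimal form of octal 0400):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 256
        fieldRef:
          fieldPath: metadata.name
EOF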
May 12 11:49:19.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:49:19.963: INFO: namespace downward-api-4628 deletion completed in 6.929780807s • [SLOW TEST:18.951 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:49:19.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7682 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 11:49:20.144: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 11:49:51.425: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostName&protocol=udp&host=10.244.2.243&port=8081&tries=1'] Namespace:pod-network-test-7682 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:49:51.425: INFO: >>> kubeConfig: /root/.kube/config I0512 11:49:51.505263 6 log.go:172] (0xc000c13b80) (0xc002301c20) Create stream I0512 11:49:51.505287 6 log.go:172] (0xc000c13b80) (0xc002301c20) Stream added, broadcasting: 1 I0512 11:49:51.506635 6 log.go:172] (0xc000c13b80) Reply frame received for 1 I0512 11:49:51.506660 6 log.go:172] (0xc000c13b80) (0xc002301d60) Create stream I0512 11:49:51.506668 6 log.go:172] (0xc000c13b80) (0xc002301d60) Stream added, broadcasting: 3 I0512 11:49:51.507412 6 log.go:172] (0xc000c13b80) Reply frame received for 3 I0512 11:49:51.507444 6 log.go:172] (0xc000c13b80) (0xc00352c500) Create stream I0512 11:49:51.507458 6 log.go:172] (0xc000c13b80) (0xc00352c500) Stream added, broadcasting: 5 I0512 11:49:51.508089 6 log.go:172] (0xc000c13b80) Reply frame received for 5 I0512 11:49:51.598539 6 log.go:172] (0xc000c13b80) Data frame received for 3 I0512 11:49:51.598563 6 log.go:172] (0xc002301d60) (3) Data frame handling I0512 11:49:51.598576 6 log.go:172] (0xc002301d60) (3) Data frame sent I0512 11:49:51.599064 6 log.go:172] (0xc000c13b80) Data frame received for 5 I0512 11:49:51.599091 6 log.go:172] (0xc00352c500) (5) Data frame handling I0512 11:49:51.599267 6 log.go:172] (0xc000c13b80) Data frame received for 3 I0512 11:49:51.599299 6 log.go:172] (0xc002301d60) (3) Data frame handling I0512 11:49:51.602443 6 log.go:172] (0xc000c13b80) Data frame received for 1 I0512 11:49:51.602471 6 log.go:172] (0xc002301c20) (1) Data frame handling 
I0512 11:49:51.602489 6 log.go:172] (0xc002301c20) (1) Data frame sent I0512 11:49:51.602505 6 log.go:172] (0xc000c13b80) (0xc002301c20) Stream removed, broadcasting: 1 I0512 11:49:51.602521 6 log.go:172] (0xc000c13b80) Go away received I0512 11:49:51.602722 6 log.go:172] (0xc000c13b80) (0xc002301c20) Stream removed, broadcasting: 1 I0512 11:49:51.602741 6 log.go:172] (0xc000c13b80) (0xc002301d60) Stream removed, broadcasting: 3 I0512 11:49:51.602754 6 log.go:172] (0xc000c13b80) (0xc00352c500) Stream removed, broadcasting: 5 May 12 11:49:51.602: INFO: Waiting for endpoints: map[] May 12 11:49:51.653: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostName&protocol=udp&host=10.244.1.236&port=8081&tries=1'] Namespace:pod-network-test-7682 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 11:49:51.653: INFO: >>> kubeConfig: /root/.kube/config I0512 11:49:51.693311 6 log.go:172] (0xc002566000) (0xc00352c960) Create stream I0512 11:49:51.693343 6 log.go:172] (0xc002566000) (0xc00352c960) Stream added, broadcasting: 1 I0512 11:49:51.694827 6 log.go:172] (0xc002566000) Reply frame received for 1 I0512 11:49:51.694871 6 log.go:172] (0xc002566000) (0xc0018c61e0) Create stream I0512 11:49:51.694893 6 log.go:172] (0xc002566000) (0xc0018c61e0) Stream added, broadcasting: 3 I0512 11:49:51.695792 6 log.go:172] (0xc002566000) Reply frame received for 3 I0512 11:49:51.695833 6 log.go:172] (0xc002566000) (0xc002d98000) Create stream I0512 11:49:51.695847 6 log.go:172] (0xc002566000) (0xc002d98000) Stream added, broadcasting: 5 I0512 11:49:51.696658 6 log.go:172] (0xc002566000) Reply frame received for 5 I0512 11:49:51.771401 6 log.go:172] (0xc002566000) Data frame received for 3 I0512 11:49:51.771447 6 log.go:172] (0xc0018c61e0) (3) Data frame handling I0512 11:49:51.771485 6 log.go:172] (0xc0018c61e0) (3) Data frame sent I0512 11:49:51.772063 6 log.go:172] (0xc002566000) Data frame received for 5 I0512 11:49:51.772106 6 log.go:172] (0xc002d98000) (5) Data frame handling I0512 11:49:51.772143 6 log.go:172] (0xc002566000) Data frame received for 3 I0512 11:49:51.772173 6 log.go:172] (0xc0018c61e0) (3) Data frame handling I0512 11:49:51.773580 6 log.go:172] (0xc002566000) Data frame received for 1 I0512 11:49:51.773600 6 log.go:172] (0xc00352c960) (1) Data frame handling I0512 11:49:51.773611 6 log.go:172] (0xc00352c960) (1) Data frame sent I0512 11:49:51.773625 6 log.go:172] (0xc002566000) (0xc00352c960) Stream removed, broadcasting: 1 I0512 11:49:51.773647 6 log.go:172] (0xc002566000) Go away received I0512 11:49:51.773761 6 log.go:172] (0xc002566000) (0xc00352c960) Stream removed, broadcasting: 1 I0512 11:49:51.773780 6 log.go:172] (0xc002566000) (0xc0018c61e0) Stream removed, broadcasting: 3 I0512 11:49:51.773789 6 log.go:172] (0xc002566000) (0xc002d98000) Stream removed, broadcasting: 5 May 12 11:49:51.773: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:49:51.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7682" for this suite. 
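For UDP the suite cannot curl the target directly, so it asks the test container pod's /dial helper to relay the request and report the answers as JSON; "Waiting for endpoints: map[]" means every expected hostname answered. One probe by hand, reusing the command and IPs from this run (a sketch; the IPs are run-specific):
$ kubectl -n pod-network-test-7682 exec host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostName&protocol=udp&host=10.244.2.243&port=8081&tries=1'"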
May 12 11:50:19.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:50:20.030: INFO: namespace pod-network-test-7682 deletion completed in 28.118469054s • [SLOW TEST:60.066 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:50:20.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:50:32.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5867" for this suite. 
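hostAliases entries are rendered by the kubelet into the pod's /etc/hosts, which is what the test above asserts. A minimal sketch (name, image, and aliases are illustrative):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
$ kubectl logs hostaliases-demo   # expect a "127.0.0.1 foo.local bar.local" line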
May 12 11:51:24.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:51:24.274: INFO: namespace kubelet-test-5867 deletion completed in 52.095559497s • [SLOW TEST:64.244 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:51:24.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:51:24.321: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 11:51:24.342: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 11:51:29.619: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 11:51:29.619: INFO: Creating deployment "test-rolling-update-deployment" May 12 11:51:29.623: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 11:51:29.951: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 11:51:32.218: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 11:51:32.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:51:34.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724881090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 11:51:36.710: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 12 11:51:37.240: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-539,SelfLink:/apis/apps/v1/namespaces/deployment-539/deployments/test-rolling-update-deployment,UID:a5167564-a392-4044-b08b-9d5dd57c9758,ResourceVersion:10472856,Generation:1,CreationTimestamp:2020-05-12 11:51:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 11:51:30 +0000 UTC 2020-05-12 11:51:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 11:51:36 +0000 UTC 2020-05-12 11:51:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 11:51:37.244: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-539,SelfLink:/apis/apps/v1/namespaces/deployment-539/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:8cf60117-d1ad-4e97-9df7-9927be116545,ResourceVersion:10472844,Generation:1,CreationTimestamp:2020-05-12 11:51:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a5167564-a392-4044-b08b-9d5dd57c9758 0xc0029cf507 0xc0029cf508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 11:51:37.244: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 11:51:37.244: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-539,SelfLink:/apis/apps/v1/namespaces/deployment-539/replicasets/test-rolling-update-controller,UID:7898d252-2cdb-4a3b-88e3-750479952558,ResourceVersion:10472854,Generation:2,CreationTimestamp:2020-05-12 11:51:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a5167564-a392-4044-b08b-9d5dd57c9758 0xc0029cf427 0xc0029cf428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 
11:51:37.247: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-z5krm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-z5krm,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-539,SelfLink:/api/v1/namespaces/deployment-539/pods/test-rolling-update-deployment-79f6b9d75c-z5krm,UID:e8cd3fe5-044f-4c99-adfa-71c1af319a62,ResourceVersion:10472843,Generation:0,CreationTimestamp:2020-05-12 11:51:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 8cf60117-d1ad-4e97-9df7-9927be116545 0xc002abc9c7 0xc002abc9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x2gk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x2gk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x2gk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002abca50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002abca70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:51:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:51:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:51:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 11:51:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.239,StartTime:2020-05-12 11:51:30 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 11:51:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://575dfe1a95083ff0e74690697f3b731961ad113c4996bccf71af0a330829623c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:51:37.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-539" for this suite. May 12 11:51:47.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:51:47.792: INFO: namespace deployment-539 deletion completed in 10.541585919s • [SLOW TEST:23.517 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 12 11:51:47.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 12 11:51:49.428: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9929242e-039d-41ae-a799-7b0d16736445", Controller:(*bool)(0xc0025482aa), BlockOwnerDeletion:(*bool)(0xc0025482ab)}} May 12 11:51:49.847: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"431a3d43-5911-41ec-9c9b-a9bc28781c80", Controller:(*bool)(0xc00254869a), BlockOwnerDeletion:(*bool)(0xc00254869b)}} May 12 11:51:50.041: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f32f8fe6-89bf-4a51-ae43-dc43f3fbd391", Controller:(*bool)(0xc003536f4a), BlockOwnerDeletion:(*bool)(0xc003536f4b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 12 11:51:55.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6576" for this suite. May 12 11:52:03.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 11:52:03.898: INFO: namespace gc-6576 deletion completed in 8.479107617s • [SLOW TEST:16.106 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSMay 12 11:52:03.898: INFO: Running AfterSuite actions on all nodes May 12 11:52:03.898: INFO: Running AfterSuite actions on node 1 May 12 11:52:03.898: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6980.855 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS
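One closing note on the Deployment dump in the rolling-update test: the tokens MaxUnavailable:25%!,(MISSING) and MaxSurge:25%!,(MISSING) are Go printf artifacts in the e2e binary's struct formatting, not real field values; the strategy in effect is the apps/v1 default of maxUnavailable: 25% and maxSurge: 25%. Setting that pairing explicitly looks like this (deployment name hypothetical):
$ kubectl patch deployment rolling-demo --type merge \
    -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"25%","maxSurge":"25%"}}}}'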