I0603 12:55:56.558649 6 e2e.go:243] Starting e2e run "bae40aaf-a3eb-4160-9ff1-016a42a00545" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591188955 - Will randomize all specs
Will run 215 of 4412 specs

Jun 3 12:55:56.742: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 12:55:56.747: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 3 12:55:56.794: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 12:55:56.820: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 12:55:56.820: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 3 12:55:56.820: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 3 12:55:56.830: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 3 12:55:56.830: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 3 12:55:56.830: INFO: e2e test version: v1.15.11
Jun 3 12:55:56.832: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:55:56.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jun 3 12:55:56.894: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 3 12:56:01.435: INFO: Successfully updated pod "annotationupdateec03038b-e215-4ea2-8c92-05d1ef8b3299"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:56:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7991" for this suite.
Jun 3 12:56:25.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:56:25.626: INFO: namespace projected-7991 deletion completed in 22.116730753s

• [SLOW TEST:28.793 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:56:25.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 3 12:56:25.684: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:56:34.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1933" for this suite.
Jun 3 12:56:56.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:56:56.252: INFO: namespace init-container-1933 deletion completed in 22.092283128s

• [SLOW TEST:30.626 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:56:56.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 3 12:56:56.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0" in namespace "projected-4807" to be "success or failure"
Jun 3 12:56:56.331: INFO: Pod "downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689681ms
Jun 3 12:56:58.351: INFO: Pod "downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024400961s
Jun 3 12:57:00.355: INFO: Pod "downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028796032s
STEP: Saw pod success
Jun 3 12:57:00.355: INFO: Pod "downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0" satisfied condition "success or failure"
Jun 3 12:57:00.359: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0 container client-container: 
STEP: delete the pod
Jun 3 12:57:00.390: INFO: Waiting for pod downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0 to disappear
Jun 3 12:57:00.396: INFO: Pod downwardapi-volume-fcfa2eb5-f000-4644-ae86-57fd2640bbb0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:57:00.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4807" for this suite.
Jun 3 12:57:06.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:57:06.487: INFO: namespace projected-4807 deletion completed in 6.088006885s

• [SLOW TEST:10.235 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:57:06.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6daec1b6-6138-4d9e-8594-2aed2f22c206
STEP: Creating a pod to test consume secrets
Jun 3 12:57:06.602: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1" in namespace "projected-6339" to be "success or failure"
Jun 3 12:57:06.614: INFO: Pod "pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.19898ms
Jun 3 12:57:08.618: INFO: Pod "pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015968329s
Jun 3 12:57:10.622: INFO: Pod "pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01990161s
STEP: Saw pod success
Jun 3 12:57:10.622: INFO: Pod "pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1" satisfied condition "success or failure"
Jun 3 12:57:10.626: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1 container projected-secret-volume-test: 
STEP: delete the pod
Jun 3 12:57:10.696: INFO: Waiting for pod pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1 to disappear
Jun 3 12:57:10.699: INFO: Pod pod-projected-secrets-5ee79146-8df5-4a0d-8a7f-12ab04aa71b1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:57:10.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6339" for this suite.
Jun 3 12:57:16.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:57:16.778: INFO: namespace projected-6339 deletion completed in 6.075757995s

• [SLOW TEST:10.290 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:57:16.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 3 12:57:16.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184" in namespace "projected-5786" to be "success or failure"
Jun 3 12:57:16.856: INFO: Pod "downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184": Phase="Pending", Reason="", readiness=false. Elapsed: 15.700088ms
Jun 3 12:57:18.860: INFO: Pod "downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020025585s
Jun 3 12:57:20.865: INFO: Pod "downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024609361s
STEP: Saw pod success
Jun 3 12:57:20.865: INFO: Pod "downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184" satisfied condition "success or failure"
Jun 3 12:57:20.869: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184 container client-container: 
STEP: delete the pod
Jun 3 12:57:20.948: INFO: Waiting for pod downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184 to disappear
Jun 3 12:57:21.127: INFO: Pod downwardapi-volume-5a2aa8d0-9607-4a73-889b-faae81862184 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:57:21.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5786" for this suite.
Jun 3 12:57:27.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:57:27.269: INFO: namespace projected-5786 deletion completed in 6.137470446s

• [SLOW TEST:10.491 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:57:27.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9886
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jun 3 12:57:27.378: INFO: Found 0 stateful pods, waiting for 3
Jun 3 12:57:37.382: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:57:37.382: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:57:37.382: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 3 12:57:47.384: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:57:47.384: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:57:47.384: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 3 12:57:47.410: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 3 12:57:57.467: INFO: Updating stateful set ss2
Jun 3 12:57:57.479: INFO: Waiting for Pod statefulset-9886/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jun 3 12:58:07.649: INFO: Found 2 stateful pods, waiting for 3
Jun 3 12:58:17.653: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:58:17.653: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 3 12:58:17.653: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 3 12:58:17.678: INFO: Updating stateful set ss2
Jun 3 12:58:17.710: INFO: Waiting for Pod statefulset-9886/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 3 12:58:27.717: INFO: Waiting for Pod statefulset-9886/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 3 12:58:37.736: INFO: Updating stateful set ss2
Jun 3 12:58:37.759: INFO: Waiting for StatefulSet statefulset-9886/ss2 to complete update
Jun 3 12:58:37.760: INFO: Waiting for Pod statefulset-9886/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 3 12:58:47.768: INFO: Deleting all statefulset in ns statefulset-9886
Jun 3 12:58:47.772: INFO: Scaling statefulset ss2 to 0
Jun 3 12:59:17.804: INFO: Waiting for statefulset status.replicas updated to 0
Jun 3 12:59:17.808: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:59:17.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9886" for this suite.
Jun 3 12:59:23.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 12:59:23.934: INFO: namespace statefulset-9886 deletion completed in 6.108403802s

• [SLOW TEST:116.666 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 12:59:23.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 12:59:57.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2450" for this suite.
Jun 3 13:00:03.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:00:03.755: INFO: namespace container-runtime-2450 deletion completed in 6.134776312s

• [SLOW TEST:39.821 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:00:03.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 3 13:00:03.851: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96" in namespace "downward-api-9717" to be "success or failure"
Jun 3 13:00:03.855: INFO: Pod "downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.698779ms
Jun 3 13:00:05.859: INFO: Pod "downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007792652s
Jun 3 13:00:07.863: INFO: Pod "downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011785258s
STEP: Saw pod success
Jun 3 13:00:07.863: INFO: Pod "downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96" satisfied condition "success or failure"
Jun 3 13:00:07.865: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96 container client-container: 
STEP: delete the pod
Jun 3 13:00:07.896: INFO: Waiting for pod downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96 to disappear
Jun 3 13:00:07.903: INFO: Pod downwardapi-volume-db453050-eae9-47ad-b3af-986acf49dd96 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:00:07.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9717" for this suite.
Jun 3 13:00:13.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:00:13.994: INFO: namespace downward-api-9717 deletion completed in 6.087556639s

• [SLOW TEST:10.238 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:00:13.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 3 13:00:18.091: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:00:18.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7052" for this suite.
Jun 3 13:00:24.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:00:24.193: INFO: namespace container-runtime-7052 deletion completed in 6.082700473s

• [SLOW TEST:10.199 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:00:24.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 3 13:00:28.823: INFO: Successfully updated pod "annotationupdate0b6defe3-6eb7-4bc4-b002-7ad27a529b69"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:00:32.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-215" for this suite.
Jun 3 13:00:54.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:00:55.022: INFO: namespace downward-api-215 deletion completed in 22.176533411s

• [SLOW TEST:30.828 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:00:55.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-11e1cf0d-dd59-45e7-8a49-cda8cb79a13f
STEP: Creating a pod to test consume configMaps
Jun 3 13:00:55.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7" in namespace "projected-1391" to be "success or failure"
Jun 3 13:00:55.119: INFO: Pod "pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.776608ms
Jun 3 13:00:57.124: INFO: Pod "pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011042408s
Jun 3 13:00:59.127: INFO: Pod "pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014810329s
STEP: Saw pod success
Jun 3 13:00:59.127: INFO: Pod "pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7" satisfied condition "success or failure"
Jun 3 13:00:59.130: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 3 13:00:59.194: INFO: Waiting for pod pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7 to disappear
Jun 3 13:00:59.216: INFO: Pod pod-projected-configmaps-ad1eafc4-5f0c-41ee-931a-5fcc5a4e38c7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:00:59.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1391" for this suite.
Jun 3 13:01:05.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:01:05.359: INFO: namespace projected-1391 deletion completed in 6.139015692s • [SLOW TEST:10.337 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:01:05.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 13:01:05.452: INFO: Waiting up to 5m0s for pod "pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c" in namespace "emptydir-8041" to be "success or failure" Jun 3 13:01:05.455: INFO: Pod "pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384284ms Jun 3 13:01:07.459: INFO: Pod "pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006842308s Jun 3 13:01:09.464: INFO: Pod "pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011770235s STEP: Saw pod success Jun 3 13:01:09.464: INFO: Pod "pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c" satisfied condition "success or failure" Jun 3 13:01:09.467: INFO: Trying to get logs from node iruya-worker2 pod pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c container test-container: STEP: delete the pod Jun 3 13:01:09.526: INFO: Waiting for pod pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c to disappear Jun 3 13:01:09.535: INFO: Pod pod-cfdde9cd-b93a-46e3-ac69-5eaba3f9a14c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:01:09.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8041" for this suite. Jun 3 13:01:15.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:01:15.630: INFO: namespace emptydir-8041 deletion completed in 6.092360123s • [SLOW TEST:10.271 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:01:15.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 3 13:01:15.742: INFO: Waiting up to 5m0s for pod "downward-api-795cd295-5c75-41d0-807f-2db3c075192f" in namespace "downward-api-495" to be "success or failure" Jun 3 13:01:15.751: INFO: Pod "downward-api-795cd295-5c75-41d0-807f-2db3c075192f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712218ms Jun 3 13:01:17.757: INFO: Pod "downward-api-795cd295-5c75-41d0-807f-2db3c075192f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014888765s Jun 3 13:01:19.761: INFO: Pod "downward-api-795cd295-5c75-41d0-807f-2db3c075192f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018698083s STEP: Saw pod success Jun 3 13:01:19.761: INFO: Pod "downward-api-795cd295-5c75-41d0-807f-2db3c075192f" satisfied condition "success or failure" Jun 3 13:01:19.763: INFO: Trying to get logs from node iruya-worker2 pod downward-api-795cd295-5c75-41d0-807f-2db3c075192f container dapi-container: STEP: delete the pod Jun 3 13:01:19.801: INFO: Waiting for pod downward-api-795cd295-5c75-41d0-807f-2db3c075192f to disappear Jun 3 13:01:19.812: INFO: Pod downward-api-795cd295-5c75-41d0-807f-2db3c075192f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:01:19.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-495" for this suite. 
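[Editor's note] The downward API pod created by this test is not shown. The standard pattern it verifies — exposing the node's IP to a container via `fieldRef: status.hostIP` — can be sketched as follows (pod name, image, and command are assumptions):

```yaml
# Hedged sketch: downward API host IP as an env var (names/image illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
  - name: dapi-container
    image: busybox           # assumed test image
    command: ["sh", "-c", "env"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolves to the IP of the node running the pod
  restartPolicy: Never
```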
Jun 3 13:01:25.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:01:25.940: INFO: namespace downward-api-495 deletion completed in 6.116124055s • [SLOW TEST:10.309 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:01:25.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:01:26.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6" in namespace "downward-api-6529" to be "success or failure" Jun 3 13:01:26.041: INFO: Pod "downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.030448ms Jun 3 13:01:28.046: INFO: Pod "downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022774133s Jun 3 13:01:30.051: INFO: Pod "downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027155602s STEP: Saw pod success Jun 3 13:01:30.051: INFO: Pod "downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6" satisfied condition "success or failure" Jun 3 13:01:30.054: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6 container client-container: STEP: delete the pod Jun 3 13:01:30.088: INFO: Waiting for pod downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6 to disappear Jun 3 13:01:30.111: INFO: Pod downwardapi-volume-ada5f46d-2a88-43de-acbe-e5aefd7399d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:01:30.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6529" for this suite. 
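[Editor's note] The "set mode on item file" case uses a per-item `mode` in a downward API volume. A minimal sketch of such a pod (field, path, and mode chosen for illustration, not taken from the test):

```yaml
# Hedged sketch: downward API volume with per-item file mode (values illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox           # assumed test image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400           # the per-item mode being verified
  restartPolicy: Never
```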
Jun 3 13:01:36.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:01:36.212: INFO: namespace downward-api-6529 deletion completed in 6.096560549s • [SLOW TEST:10.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:01:36.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:01:36.328: INFO: Create a RollingUpdate DaemonSet Jun 3 13:01:36.332: INFO: Check that daemon pods launch on every node of the cluster Jun 3 13:01:36.343: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:36.348: INFO: Number of nodes with available pods: 0 Jun 3 13:01:36.348: INFO: Node iruya-worker is running more than one daemon 
pod Jun 3 13:01:37.352: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:37.354: INFO: Number of nodes with available pods: 0 Jun 3 13:01:37.355: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:01:38.353: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:38.357: INFO: Number of nodes with available pods: 0 Jun 3 13:01:38.357: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:01:39.353: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:39.356: INFO: Number of nodes with available pods: 0 Jun 3 13:01:39.356: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:01:40.352: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:40.355: INFO: Number of nodes with available pods: 1 Jun 3 13:01:40.355: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:01:41.354: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:41.357: INFO: Number of nodes with available pods: 2 Jun 3 13:01:41.357: INFO: Number of running nodes: 2, number of available pods: 2 Jun 3 13:01:41.357: INFO: Update the DaemonSet to trigger a rollout Jun 3 13:01:41.365: INFO: Updating DaemonSet daemon-set Jun 3 13:01:52.392: INFO: Roll back the DaemonSet before rollout is complete Jun 3 13:01:52.399: INFO: Updating DaemonSet daemon-set Jun 3 13:01:52.399: INFO: Make sure DaemonSet 
rollback is complete Jun 3 13:01:52.404: INFO: Wrong image for pod: daemon-set-7cqdr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 3 13:01:52.404: INFO: Pod daemon-set-7cqdr is not available Jun 3 13:01:52.410: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:53.415: INFO: Wrong image for pod: daemon-set-7cqdr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 3 13:01:53.415: INFO: Pod daemon-set-7cqdr is not available Jun 3 13:01:53.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:54.415: INFO: Wrong image for pod: daemon-set-7cqdr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 3 13:01:54.415: INFO: Pod daemon-set-7cqdr is not available Jun 3 13:01:54.420: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:55.414: INFO: Wrong image for pod: daemon-set-7cqdr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 3 13:01:55.414: INFO: Pod daemon-set-7cqdr is not available Jun 3 13:01:55.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:56.415: INFO: Wrong image for pod: daemon-set-7cqdr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Jun 3 13:01:56.415: INFO: Pod daemon-set-7cqdr is not available Jun 3 13:01:56.418: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:01:57.415: INFO: Pod daemon-set-7xsm5 is not available Jun 3 13:01:57.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5574, will wait for the garbage collector to delete the pods Jun 3 13:01:57.484: INFO: Deleting DaemonSet.extensions daemon-set took: 6.046098ms Jun 3 13:01:57.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.364737ms Jun 3 13:02:00.988: INFO: Number of nodes with available pods: 0 Jun 3 13:02:00.988: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 13:02:00.991: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5574/daemonsets","resourceVersion":"14438192"},"items":null} Jun 3 13:02:00.994: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5574/pods","resourceVersion":"14438192"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:02:01.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5574" for this suite. 
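[Editor's note] The test drives this update/rollback sequence through the API programmatically; the equivalent workflow with standard kubectl commands would be roughly the following (the namespace and DaemonSet name match the log; the `*=` wildcard updates every container image):

```shell
# Hedged sketch of the rollback flow this test performs (requires a live cluster)
kubectl -n daemonsets-5574 set image daemonset/daemon-set '*=foo:non-existent'  # trigger a rollout that cannot complete
kubectl -n daemonsets-5574 rollout undo daemonset/daemon-set                    # roll back before the rollout finishes
kubectl -n daemonsets-5574 rollout status daemonset/daemon-set                  # wait for the rollback to complete
```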
Jun 3 13:02:07.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:02:07.120: INFO: namespace daemonsets-5574 deletion completed in 6.109382125s • [SLOW TEST:30.907 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:02:07.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 3 13:02:07.222: INFO: Waiting up to 5m0s for pod "pod-45fe4266-ed56-42a4-945c-60e53cca0e17" in namespace "emptydir-3523" to be "success or failure" Jun 3 13:02:07.229: INFO: Pod "pod-45fe4266-ed56-42a4-945c-60e53cca0e17": Phase="Pending", Reason="", readiness=false. Elapsed: 7.259552ms Jun 3 13:02:09.233: INFO: Pod "pod-45fe4266-ed56-42a4-945c-60e53cca0e17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011048885s Jun 3 13:02:11.237: INFO: Pod "pod-45fe4266-ed56-42a4-945c-60e53cca0e17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015330798s STEP: Saw pod success Jun 3 13:02:11.237: INFO: Pod "pod-45fe4266-ed56-42a4-945c-60e53cca0e17" satisfied condition "success or failure" Jun 3 13:02:11.240: INFO: Trying to get logs from node iruya-worker pod pod-45fe4266-ed56-42a4-945c-60e53cca0e17 container test-container: STEP: delete the pod Jun 3 13:02:11.312: INFO: Waiting for pod pod-45fe4266-ed56-42a4-945c-60e53cca0e17 to disappear Jun 3 13:02:11.351: INFO: Pod pod-45fe4266-ed56-42a4-945c-60e53cca0e17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:02:11.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3523" for this suite. Jun 3 13:02:17.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:02:17.462: INFO: namespace emptydir-3523 deletion completed in 6.107668452s • [SLOW TEST:10.342 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:02:17.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jun 3 13:02:17.530: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jun 3 13:02:17.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:20.212: INFO: stderr: ""
Jun 3 13:02:20.212: INFO: stdout: "service/redis-slave created\n"
Jun 3 13:02:20.212: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jun 3 13:02:20.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:20.498: INFO: stderr: ""
Jun 3 13:02:20.498: INFO: stdout: "service/redis-master created\n"
Jun 3 13:02:20.498: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jun 3 13:02:20.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:20.832: INFO: stderr: ""
Jun 3 13:02:20.832: INFO: stdout: "service/frontend created\n"
Jun 3 13:02:20.832: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jun 3 13:02:20.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:21.150: INFO: stderr: ""
Jun 3 13:02:21.150: INFO: stdout: "deployment.apps/frontend created\n"
Jun 3 13:02:21.150: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 3 13:02:21.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:21.488: INFO: stderr: ""
Jun 3 13:02:21.488: INFO: stdout: "deployment.apps/redis-master created\n"
Jun 3 13:02:21.488: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jun 3 13:02:21.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6050'
Jun 3 13:02:21.804: INFO: stderr: ""
Jun 3 13:02:21.804: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jun 3 13:02:21.804: INFO: Waiting for all frontend pods to be Running.
Jun 3 13:02:31.855: INFO: Waiting for frontend to serve content.
Jun 3 13:02:31.920: INFO: Trying to add a new entry to the guestbook.
Jun 3 13:02:31.941: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jun 3 13:02:31.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 13:02:32.093: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jun 3 13:02:32.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 13:02:32.253: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 3 13:02:32.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 13:02:32.404: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 3 13:02:32.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 13:02:32.516: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 3 13:02:32.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.625: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 13:02:32.625: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 3 13:02:32.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6050'
Jun 3 13:02:32.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 13:02:32.734: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:02:32.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6050" for this suite. Jun 3 13:03:12.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:03:12.880: INFO: namespace kubectl-6050 deletion completed in 40.122411919s • [SLOW TEST:55.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:03:12.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-7a4f60f5-b4b8-402c-b887-ceb7774f51a7 STEP: Creating a pod to test 
consume secrets Jun 3 13:03:13.278: INFO: Waiting up to 5m0s for pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc" in namespace "secrets-8704" to be "success or failure" Jun 3 13:03:13.281: INFO: Pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438164ms Jun 3 13:03:15.286: INFO: Pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007757335s Jun 3 13:03:17.294: INFO: Pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc": Phase="Running", Reason="", readiness=true. Elapsed: 4.016364448s Jun 3 13:03:19.299: INFO: Pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020943202s STEP: Saw pod success Jun 3 13:03:19.299: INFO: Pod "pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc" satisfied condition "success or failure" Jun 3 13:03:19.302: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc container secret-volume-test: STEP: delete the pod Jun 3 13:03:19.408: INFO: Waiting for pod pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc to disappear Jun 3 13:03:19.413: INFO: Pod pod-secrets-20903655-8c9a-4128-a014-fc83ff872acc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:03:19.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8704" for this suite. 
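[Editor's note] The secret-volume pod for the "with mappings" case is not printed. A minimal sketch of the pattern it verifies — a secret key remapped to a different path inside the volume — could look like this (key, path, image, and pod name are illustrative assumptions):

```yaml
# Hedged sketch: secret volume with a key-to-path mapping (values illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox                 # assumed test image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map  # Secret created earlier in the log (suffix elided)
      items:
      - key: data-1                # assumed key name
        path: new-path-data-1      # the "mapping": key exposed under a new path
  restartPolicy: Never
```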
Jun 3 13:03:25.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:03:25.541: INFO: namespace secrets-8704 deletion completed in 6.119502453s • [SLOW TEST:12.660 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:03:25.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:03:25.690: INFO: Creating ReplicaSet my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e Jun 3 13:03:25.717: INFO: Pod name my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e: Found 0 pods out of 1 Jun 3 13:03:30.723: INFO: Pod name my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e: Found 1 pods out of 1 Jun 3 13:03:30.723: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e" is running Jun 3 13:03:30.727: INFO: Pod "my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e-b5kjm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-06-03 13:03:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 13:03:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 13:03:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 13:03:25 +0000 UTC Reason: Message:}]) Jun 3 13:03:30.727: INFO: Trying to dial the pod Jun 3 13:03:35.756: INFO: Controller my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e: Got expected result from replica 1 [my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e-b5kjm]: "my-hostname-basic-18132c81-0101-4fc1-a4f7-e71737460d1e-b5kjm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:03:35.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8437" for this suite. 
Jun 3 13:03:41.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:03:41.860: INFO: namespace replicaset-8437 deletion completed in 6.099267496s • [SLOW TEST:16.319 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:03:41.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:03:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6123" for this suite. 
Jun 3 13:04:28.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:04:28.174: INFO: namespace kubelet-test-6123 deletion completed in 40.093634084s • [SLOW TEST:46.314 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:04:28.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 3 13:04:28.268: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 13:04:28.293: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 13:04:28.295: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 3 13:04:28.300: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.300: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 13:04:28.300: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.300: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 13:04:28.300: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 3 13:04:28.306: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.306: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 13:04:28.306: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.306: INFO: Container kindnet-cni ready: true, restart count 2 Jun 3 13:04:28.306: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.306: INFO: Container coredns ready: true, restart count 0 Jun 3 13:04:28.306: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 3 13:04:28.306: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Jun 3 13:04:28.392: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Jun 3 13:04:28.392: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Jun 3 13:04:28.392: INFO: Pod kindnet-gwz5g 
requesting resource cpu=100m on Node iruya-worker Jun 3 13:04:28.392: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Jun 3 13:04:28.392: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Jun 3 13:04:28.392: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-b58b6079-fe31-4034-8052-9da2c222850c.16150a641c2c8d30], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6257/filler-pod-b58b6079-fe31-4034-8052-9da2c222850c to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58b6079-fe31-4034-8052-9da2c222850c.16150a64a7590cc3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58b6079-fe31-4034-8052-9da2c222850c.16150a64e45e4a1d], Reason = [Created], Message = [Created container filler-pod-b58b6079-fe31-4034-8052-9da2c222850c] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58b6079-fe31-4034-8052-9da2c222850c.16150a64f52cdd1a], Reason = [Started], Message = [Started container filler-pod-b58b6079-fe31-4034-8052-9da2c222850c] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b.16150a641acc6296], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6257/filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b.16150a6465a49743], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b.16150a64c922b54b], Reason = [Created], Message = [Created container 
filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b.16150a64e1563848], Reason = [Started], Message = [Started container filler-pod-dd0c3f05-c31c-465d-b999-22044a074c9b] STEP: Considering event: Type = [Warning], Name = [additional-pod.16150a6583353882], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:04:35.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6257" for this suite. Jun 3 13:04:41.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:04:41.650: INFO: namespace sched-pred-6257 deletion completed in 6.100050379s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.475 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:04:41.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-dc9ac953-b1e2-4066-8cb6-aaa5e4429545 STEP: Creating a pod to test consume secrets Jun 3 13:04:42.003: INFO: Waiting up to 5m0s for pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182" in namespace "secrets-5450" to be "success or failure" Jun 3 13:04:42.083: INFO: Pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182": Phase="Pending", Reason="", readiness=false. Elapsed: 79.8043ms Jun 3 13:04:44.087: INFO: Pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084046194s Jun 3 13:04:46.090: INFO: Pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182": Phase="Running", Reason="", readiness=true. Elapsed: 4.087582468s Jun 3 13:04:48.095: INFO: Pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.091706762s STEP: Saw pod success Jun 3 13:04:48.095: INFO: Pod "pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182" satisfied condition "success or failure" Jun 3 13:04:48.097: INFO: Trying to get logs from node iruya-worker pod pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182 container secret-volume-test: STEP: delete the pod Jun 3 13:04:48.176: INFO: Waiting for pod pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182 to disappear Jun 3 13:04:48.187: INFO: Pod pod-secrets-baa051ed-ee8e-49fe-84c4-b2d8f5569182 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:04:48.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5450" for this suite. Jun 3 13:04:54.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:04:54.302: INFO: namespace secrets-5450 deletion completed in 6.11171867s • [SLOW TEST:12.652 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:04:54.303: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 13:04:58.455: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:04:58.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3489" for this suite. Jun 3 13:05:04.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:05:04.609: INFO: namespace container-runtime-3489 deletion completed in 6.095480663s • [SLOW TEST:10.306 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:05:04.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-2469 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2469 to expose endpoints map[] Jun 3 13:05:04.726: INFO: successfully validated that service multi-endpoint-test in namespace services-2469 exposes endpoints map[] (12.420007ms elapsed) STEP: Creating pod pod1 in namespace services-2469 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2469 to expose endpoints map[pod1:[100]] Jun 3 13:05:07.829: INFO: successfully validated that service multi-endpoint-test in namespace services-2469 exposes endpoints map[pod1:[100]] (3.097665848s elapsed) STEP: Creating pod pod2 in namespace services-2469 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2469 to expose endpoints map[pod1:[100] pod2:[101]] Jun 3 13:05:12.119: INFO: successfully validated that service multi-endpoint-test in namespace services-2469 exposes endpoints map[pod1:[100] pod2:[101]] (4.287774195s elapsed) STEP: Deleting pod pod1 in namespace services-2469 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2469 to expose endpoints map[pod2:[101]] Jun 3 13:05:13.147: INFO: successfully validated 
that service multi-endpoint-test in namespace services-2469 exposes endpoints map[pod2:[101]] (1.0232316s elapsed) STEP: Deleting pod pod2 in namespace services-2469 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2469 to expose endpoints map[] Jun 3 13:05:14.175: INFO: successfully validated that service multi-endpoint-test in namespace services-2469 exposes endpoints map[] (1.022387329s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:05:14.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2469" for this suite. Jun 3 13:05:36.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:05:36.427: INFO: namespace services-2469 deletion completed in 22.091556237s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.818 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:05:36.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jun 3 13:05:36.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 3 13:05:36.596: INFO: stderr: "" Jun 3 13:05:36.596: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:05:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1091" for this suite. 
Jun 3 13:05:42.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:05:42.691: INFO: namespace kubectl-1091 deletion completed in 6.09161736s • [SLOW TEST:6.264 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:05:42.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jun 3 13:05:42.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 3 13:05:42.954: INFO: stderr: "" Jun 3 13:05:42.954: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:05:42.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7776" for this suite. 
Jun 3 13:05:48.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:05:49.069: INFO: namespace kubectl-7776 deletion completed in 6.091002841s • [SLOW TEST:6.378 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:05:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-6ctd STEP: Creating a pod to test atomic-volume-subpath Jun 3 13:05:49.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6ctd" in namespace "subpath-1537" to be "success or failure" Jun 3 13:05:49.165: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.650923ms Jun 3 13:05:51.355: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19352366s Jun 3 13:05:53.368: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 4.206495208s Jun 3 13:05:55.373: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 6.211145012s Jun 3 13:05:57.377: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 8.215214785s Jun 3 13:05:59.382: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 10.220010045s Jun 3 13:06:01.386: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 12.224083925s Jun 3 13:06:03.390: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 14.228367194s Jun 3 13:06:05.395: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 16.232888089s Jun 3 13:06:07.399: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 18.237450002s Jun 3 13:06:09.404: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 20.242108s Jun 3 13:06:11.409: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Running", Reason="", readiness=true. Elapsed: 22.247278323s Jun 3 13:06:13.414: INFO: Pod "pod-subpath-test-configmap-6ctd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.251939003s STEP: Saw pod success Jun 3 13:06:13.414: INFO: Pod "pod-subpath-test-configmap-6ctd" satisfied condition "success or failure" Jun 3 13:06:13.416: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-6ctd container test-container-subpath-configmap-6ctd: STEP: delete the pod Jun 3 13:06:13.452: INFO: Waiting for pod pod-subpath-test-configmap-6ctd to disappear Jun 3 13:06:13.582: INFO: Pod pod-subpath-test-configmap-6ctd no longer exists STEP: Deleting pod pod-subpath-test-configmap-6ctd Jun 3 13:06:13.582: INFO: Deleting pod "pod-subpath-test-configmap-6ctd" in namespace "subpath-1537" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:06:13.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1537" for this suite. Jun 3 13:06:19.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:06:19.678: INFO: namespace subpath-1537 deletion completed in 6.08920441s • [SLOW TEST:30.608 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 
3 13:06:19.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 13:06:19.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6380' Jun 3 13:06:19.812: INFO: stderr: "" Jun 3 13:06:19.812: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 3 13:06:24.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6380 -o json' Jun 3 13:06:24.960: INFO: stderr: "" Jun 3 13:06:24.960: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-03T13:06:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-6380\",\n \"resourceVersion\": \"14439231\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6380/pods/e2e-test-nginx-pod\",\n \"uid\": \"96eabdbf-fc85-4f7c-951f-dd45384685ee\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": 
\"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c9gfv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c9gfv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-c9gfv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-03T13:06:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-03T13:06:22Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-03T13:06:22Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-03T13:06:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a140a060c7b73d5a5b69f322a0ec6a553ddbced1e9879762fee62b17371940b0\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": 
\"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-03T13:06:22Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.200\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-03T13:06:19Z\"\n }\n}\n" STEP: replace the image in the pod Jun 3 13:06:24.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6380' Jun 3 13:06:25.368: INFO: stderr: "" Jun 3 13:06:25.368: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 3 13:06:25.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6380' Jun 3 13:06:31.866: INFO: stderr: "" Jun 3 13:06:31.866: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:06:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6380" for this suite. 
Jun 3 13:06:37.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:06:37.966: INFO: namespace kubectl-6380 deletion completed in 6.096216459s • [SLOW TEST:18.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:06:37.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 3 13:06:38.064: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 3 13:06:47.153: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed 
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:06:47.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6305" for this suite. Jun 3 13:06:53.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:06:53.264: INFO: namespace pods-6305 deletion completed in 6.103544928s • [SLOW TEST:15.298 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:06:53.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 13:06:53.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4442' Jun 3 13:06:53.441: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 3 13:06:53.441: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jun 3 13:06:55.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4442' Jun 3 13:06:55.642: INFO: stderr: "" Jun 3 13:06:55.642: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:06:55.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4442" for this suite. 
Jun 3 13:07:17.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:07:17.758: INFO: namespace kubectl-4442 deletion completed in 22.11200689s • [SLOW TEST:24.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:07:17.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 3 13:07:22.396: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1730ffbf-ef0c-4c3f-b4a5-e4bdff0ec294" Jun 3 13:07:22.396: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1730ffbf-ef0c-4c3f-b4a5-e4bdff0ec294" in namespace "pods-7554" to be "terminated due to deadline 
exceeded" Jun 3 13:07:22.406: INFO: Pod "pod-update-activedeadlineseconds-1730ffbf-ef0c-4c3f-b4a5-e4bdff0ec294": Phase="Running", Reason="", readiness=true. Elapsed: 9.960387ms Jun 3 13:07:24.413: INFO: Pod "pod-update-activedeadlineseconds-1730ffbf-ef0c-4c3f-b4a5-e4bdff0ec294": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01700914s Jun 3 13:07:24.413: INFO: Pod "pod-update-activedeadlineseconds-1730ffbf-ef0c-4c3f-b4a5-e4bdff0ec294" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:07:24.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7554" for this suite. Jun 3 13:07:30.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:07:30.515: INFO: namespace pods-7554 deletion completed in 6.097840285s • [SLOW TEST:12.756 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:07:30.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3aeb754b-10f2-412f-be7e-6b4040bfd74c in namespace container-probe-3902 Jun 3 13:07:34.627: INFO: Started pod busybox-3aeb754b-10f2-412f-be7e-6b4040bfd74c in namespace container-probe-3902 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 13:07:34.630: INFO: Initial restart count of pod busybox-3aeb754b-10f2-412f-be7e-6b4040bfd74c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:11:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3902" for this suite. 
Jun 3 13:11:41.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:11:41.360: INFO: namespace container-probe-3902 deletion completed in 6.140768491s • [SLOW TEST:250.844 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:11:41.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:11:41.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9" in namespace "downward-api-9191" to be "success or failure" Jun 3 13:11:41.451: INFO: Pod "downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.572224ms Jun 3 13:11:43.455: INFO: Pod "downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020253195s Jun 3 13:11:45.460: INFO: Pod "downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025762766s STEP: Saw pod success Jun 3 13:11:45.460: INFO: Pod "downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9" satisfied condition "success or failure" Jun 3 13:11:45.464: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9 container client-container: STEP: delete the pod Jun 3 13:11:45.482: INFO: Waiting for pod downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9 to disappear Jun 3 13:11:45.518: INFO: Pod downwardapi-volume-618e34a8-116d-43bf-93f1-ed81d597e5a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:11:45.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9191" for this suite. 
Jun 3 13:11:51.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:11:51.661: INFO: namespace downward-api-9191 deletion completed in 6.138734955s • [SLOW TEST:10.301 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:11:51.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 3 13:11:51.792: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6646,SelfLink:/api/v1/namespaces/watch-6646/configmaps/e2e-watch-test-resource-version,UID:c1cbff75-102a-4785-951a-0ccc7f069dd1,ResourceVersion:14440039,Generation:0,CreationTimestamp:2020-06-03 13:11:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 3 13:11:51.792: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6646,SelfLink:/api/v1/namespaces/watch-6646/configmaps/e2e-watch-test-resource-version,UID:c1cbff75-102a-4785-951a-0ccc7f069dd1,ResourceVersion:14440040,Generation:0,CreationTimestamp:2020-06-03 13:11:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:11:51.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6646" for this suite. 
Jun 3 13:11:57.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:11:57.889: INFO: namespace watch-6646 deletion completed in 6.093398005s • [SLOW TEST:6.226 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:11:57.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:11:58.027: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 3 13:12:03.033: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 13:12:03.033: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 3 13:12:03.083: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-732,SelfLink:/apis/apps/v1/namespaces/deployment-732/deployments/test-cleanup-deployment,UID:1d440a08-5f20-4965-b1fe-1c5e91f96ccc,ResourceVersion:14440083,Generation:1,CreationTimestamp:2020-06-03 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 3 13:12:03.099: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-732,SelfLink:/apis/apps/v1/namespaces/deployment-732/replicasets/test-cleanup-deployment-55bbcbc84c,UID:0066564d-ba25-4b11-96cf-a2466e02f871,ResourceVersion:14440085,Generation:1,CreationTimestamp:2020-06-03 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
1d440a08-5f20-4965-b1fe-1c5e91f96ccc 0xc0021f8117 0xc0021f8118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 13:12:03.099: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 3 13:12:03.100: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-732,SelfLink:/apis/apps/v1/namespaces/deployment-732/replicasets/test-cleanup-controller,UID:c07996a1-8bd6-4579-8fdb-5fa2d00ed76c,ResourceVersion:14440084,Generation:1,CreationTimestamp:2020-06-03 13:11:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1d440a08-5f20-4965-b1fe-1c5e91f96ccc 0xc0021f802f 0xc0021f8040}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 3 13:12:03.163: INFO: Pod "test-cleanup-controller-z2f95" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-z2f95,GenerateName:test-cleanup-controller-,Namespace:deployment-732,SelfLink:/api/v1/namespaces/deployment-732/pods/test-cleanup-controller-z2f95,UID:724f8b3e-1081-46cf-a09d-ab4ed97c11ce,ResourceVersion:14440079,Generation:0,CreationTimestamp:2020-06-03 13:11:58 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c07996a1-8bd6-4579-8fdb-5fa2d00ed76c 0xc0021f89d7 0xc0021f89d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c562t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c562t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c562t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021f8a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021f8a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:11:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:12:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:12:01 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:11:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.203,StartTime:2020-06-03 13:11:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:12:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://acbb613e95469c2d389689ccfedcdc54322c51b7136d5973835a4dc86110eef0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:12:03.163: INFO: Pod "test-cleanup-deployment-55bbcbc84c-nsm2s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-nsm2s,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-732,SelfLink:/api/v1/namespaces/deployment-732/pods/test-cleanup-deployment-55bbcbc84c-nsm2s,UID:13cb2745-698b-4e32-b20d-f5c29d0458ac,ResourceVersion:14440091,Generation:0,CreationTimestamp:2020-06-03 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 0066564d-ba25-4b11-96cf-a2466e02f871 0xc0021f8b57 0xc0021f8b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c562t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c562t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c562t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021f8be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021f8c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:12:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:12:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-732" for this suite. 
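For reference, the cleanup behavior this Deployment test exercises ("should delete old replica sets") is governed by `.spec.revisionHistoryLimit`: after a rollout replaces a ReplicaSet, the controller prunes superseded ReplicaSets beyond that limit. A minimal sketch of such a manifest, with names, labels, and the redis image taken from the pod dumps above; the exact manifest the test submits is not shown in the log, so the limit value here is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment   # matches the generated pod names in the dump above
spec:
  revisionHistoryLimit: 0         # prune old ReplicaSets as soon as they are replaced
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```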
Jun 3 13:12:09.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:12:09.323: INFO: namespace deployment-732 deletion completed in 6.150141188s • [SLOW TEST:11.433 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:12:09.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 3 13:12:09.464: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:09.491: INFO: Number of nodes with available pods: 0 Jun 3 13:12:09.491: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:12:10.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:10.500: INFO: Number of nodes with available pods: 0 Jun 3 13:12:10.500: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:12:11.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:11.505: INFO: Number of nodes with available pods: 0 Jun 3 13:12:11.505: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:12:12.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:12.499: INFO: Number of nodes with available pods: 0 Jun 3 13:12:12.499: INFO: Node iruya-worker is running more than one daemon pod Jun 3 13:12:13.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:13.500: INFO: Number of nodes with available pods: 1 Jun 3 13:12:13.500: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 13:12:14.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:14.500: INFO: Number of nodes with available pods: 2 Jun 3 13:12:14.500: INFO: Number of running nodes: 
2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 3 13:12:14.555: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:14.614: INFO: Number of nodes with available pods: 1 Jun 3 13:12:14.614: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 13:12:15.619: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:15.623: INFO: Number of nodes with available pods: 1 Jun 3 13:12:15.623: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 13:12:16.618: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:16.621: INFO: Number of nodes with available pods: 1 Jun 3 13:12:16.621: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 13:12:17.619: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 13:12:17.621: INFO: Number of nodes with available pods: 2 Jun 3 13:12:17.621: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
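The repeated "can't tolerate node iruya-control-plane" lines above are expected: that node carries the `node-role.kubernetes.io/master:NoSchedule` taint, and the test DaemonSet's pods only carry the default `not-ready`/`unreachable` NoExecute tolerations (visible in the pod dumps), so the framework skips the control-plane node when counting available pods. For contrast, a DaemonSet that *should* also run there would need a toleration like the following in `.spec.template.spec`; this is a hypothetical addition, not something the e2e DaemonSet does:

```yaml
# Taint reported on iruya-control-plane in the log:
#   node-role.kubernetes.io/master:NoSchedule
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```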
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6513, will wait for the garbage collector to delete the pods Jun 3 13:12:17.686: INFO: Deleting DaemonSet.extensions daemon-set took: 7.354293ms Jun 3 13:12:17.986: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.343242ms Jun 3 13:12:31.890: INFO: Number of nodes with available pods: 0 Jun 3 13:12:31.890: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 13:12:31.893: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6513/daemonsets","resourceVersion":"14440237"},"items":null} Jun 3 13:12:31.910: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6513/pods","resourceVersion":"14440237"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:12:31.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6513" for this suite. 
Jun 3 13:12:37.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:12:38.033: INFO: namespace daemonsets-6513 deletion completed in 6.108774272s • [SLOW TEST:28.709 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:12:38.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 3 13:12:38.084: INFO: PodSpec: initContainers in spec.initContainers Jun 3 13:13:26.731: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f44acd4f-7f15-4ca5-8788-2f5010700a76", GenerateName:"", Namespace:"init-container-6430", 
SelfLink:"/api/v1/namespaces/init-container-6430/pods/pod-init-f44acd4f-7f15-4ca5-8788-2f5010700a76", UID:"e52833a6-261e-4a43-916d-1e7a0d325b9c", ResourceVersion:"14440401", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726786758, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"84038304"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-l2nz9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000c7c1c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l2nz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l2nz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l2nz9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026fe088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00280a060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0026fe110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026fe130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026fe138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026fe13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726786758, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726786758, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726786758, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726786758, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.206", StartTime:(*v1.Time)(0xc0011b6060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025fe070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025fe0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1dfa641669610a7be69ea4d1f61556ffa489c903a20231c279282d57e8792be0"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011b60a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011b6080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:13:26.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6430" for this suite. 
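The `v1.Pod` dumped above reduces to roughly the manifest below: two init containers that must complete in order before `run1` can start, with `init1` deliberately failing (`/bin/false`) so that `init2` stays Waiting and the app container never runs, which is exactly what the ContainerStatuses show (`init1` RestartCount:3, `init2` and `run1` never started). Reconstructed from the dump; the generated pod name and the injected service-account token volume are elided:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: foo
spec:
  restartPolicy: Always           # init1 is retried with backoff, hence RestartCount:3
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # always fails, blocking init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                    # equal requests/limits => QOSClass: Guaranteed
      limits:   {cpu: 100m, memory: "52428800"}
      requests: {cpu: 100m, memory: "52428800"}
```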
Jun 3 13:13:48.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:13:48.884: INFO: namespace init-container-6430 deletion completed in 22.091908891s • [SLOW TEST:70.850 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:13:48.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0603 13:14:01.113504 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 3 13:14:01.113: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:14:01.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8000" for this suite. 
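The garbage-collector scenario above hinges on half the pods carrying two ownerReferences: one to `simpletest-rc-to-be-deleted` (which is deleted and waits for its dependents) and one to `simpletest-rc-to-stay`. A dependent is only collectable once *all* of its owners are gone, so those dual-owned pods must survive the deletion, which is what the test asserts. A sketch of the metadata involved; the UIDs are placeholders, not values from this run:

```yaml
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-1>            # placeholder; this owner is deleted during the test
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-2>            # placeholder; still valid, so the pod is not collected
```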
Jun 3 13:14:09.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:14:09.236: INFO: namespace gc-8000 deletion completed in 8.111134482s • [SLOW TEST:20.350 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:14:09.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 3 13:14:09.376: INFO: Waiting up to 5m0s for pod "downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3" in namespace "downward-api-8034" to be "success or failure" Jun 3 13:14:09.438: INFO: Pod "downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 62.775924ms Jun 3 13:14:11.506: INFO: Pod "downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.130179263s Jun 3 13:14:13.511: INFO: Pod "downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134963348s STEP: Saw pod success Jun 3 13:14:13.511: INFO: Pod "downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3" satisfied condition "success or failure" Jun 3 13:14:13.514: INFO: Trying to get logs from node iruya-worker pod downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3 container dapi-container: STEP: delete the pod Jun 3 13:14:13.535: INFO: Waiting for pod downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3 to disappear Jun 3 13:14:13.559: INFO: Pod downward-api-f1d129df-dd17-4453-a512-ed5e43c4c0b3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:14:13.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8034" for this suite. Jun 3 13:14:19.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:14:19.663: INFO: namespace downward-api-8034 deletion completed in 6.099648232s • [SLOW TEST:10.427 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:14:19.663: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 13:14:27.772: INFO: DNS probes using dns-test-b3f1fe3c-cd8c-46d1-ad85-cf11fad02cd0 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 13:14:35.946: INFO: File wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 3 13:14:35.949: INFO: File jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:35.949: INFO: Lookups using dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd failed for: [wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local] Jun 3 13:14:40.955: INFO: File wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:40.959: INFO: File jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:40.959: INFO: Lookups using dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd failed for: [wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local] Jun 3 13:14:45.954: INFO: File wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:45.957: INFO: File jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:45.958: INFO: Lookups using dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd failed for: [wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local] Jun 3 13:14:50.954: INFO: File wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 3 13:14:50.958: INFO: File jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:50.958: INFO: Lookups using dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd failed for: [wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local] Jun 3 13:14:55.984: INFO: File wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:55.989: INFO: File jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local from pod dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 3 13:14:55.989: INFO: Lookups using dns-6959/dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd failed for: [wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local] Jun 3 13:15:00.959: INFO: DNS probes using dns-test-7567275e-95c3-4b4e-ae8c-da256e98bbfd succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6959.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6959.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 13:15:09.680: INFO: DNS probes using dns-test-21680b9e-f7d8-497e-b5f5-02e4a083483c succeeded STEP: deleting the pod STEP: deleting the test externalName service 
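The retries logged above ("contains 'foo.example.com. ' instead of 'bar.example.com.'") come from the probe loop the test runs inside its wheezy/jessie pods: poll the service name until the CNAME target reflects the updated ExternalName. A runnable sketch of that pattern; `lookup` is a stub standing in for `dig +short <name> CNAME`, since this snippet runs outside a cluster:

```shell
#!/bin/sh
# Stub for: dig +short "$1" CNAME  (in the real probe pod this queries cluster DNS)
lookup() {
  echo "bar.example.com."
}

expected="bar.example.com."
result=""
# Same shape as the test's probe command: up to 30 attempts, 1s apart.
for i in $(seq 1 30); do
  result=$(lookup dns-test-service-3.dns-6959.svc.cluster.local)
  [ "$result" = "$expected" ] && break
  sleep 1
done
echo "$result"
```

Until the CoreDNS cache picks up the changed ExternalName, the real loop keeps seeing the stale target, which is why the log shows several failed rounds before the probes succeed.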
[AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:15:09.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6959" for this suite. Jun 3 13:15:15.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:15:15.883: INFO: namespace dns-6959 deletion completed in 6.083903204s • [SLOW TEST:56.220 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:15:15.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:15:16.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671" in namespace "downward-api-6119" to be "success or failure" Jun 3 13:15:16.030: INFO: Pod 
"downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.598266ms Jun 3 13:15:18.035: INFO: Pod "downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009373383s Jun 3 13:15:20.039: INFO: Pod "downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01345846s STEP: Saw pod success Jun 3 13:15:20.039: INFO: Pod "downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671" satisfied condition "success or failure" Jun 3 13:15:20.041: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671 container client-container: STEP: delete the pod Jun 3 13:15:20.079: INFO: Waiting for pod downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671 to disappear Jun 3 13:15:20.107: INFO: Pod downwardapi-volume-d9d8a1ca-e7cd-4f77-99f6-4277db165671 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:15:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6119" for this suite. 
Jun 3 13:15:26.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:15:26.248: INFO: namespace downward-api-6119 deletion completed in 6.137344229s • [SLOW TEST:10.364 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:15:26.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:15:52.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2717" for this suite. Jun 3 13:15:58.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:15:58.660: INFO: namespace namespaces-2717 deletion completed in 6.112500132s STEP: Destroying namespace "nsdeletetest-5600" for this suite. Jun 3 13:15:58.663: INFO: Namespace nsdeletetest-5600 was already deleted STEP: Destroying namespace "nsdeletetest-9419" for this suite. Jun 3 13:16:04.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:16:04.756: INFO: namespace nsdeletetest-9419 deletion completed in 6.093230806s • [SLOW TEST:38.508 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:16:04.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 3 13:16:04.816: INFO: Waiting up to 5m0s for pod "pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150" in namespace "emptydir-7367" to be "success or failure" Jun 3 13:16:04.820: INFO: Pod "pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332519ms Jun 3 13:16:06.824: INFO: Pod "pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008566786s Jun 3 13:16:08.829: INFO: Pod "pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01338142s STEP: Saw pod success Jun 3 13:16:08.829: INFO: Pod "pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150" satisfied condition "success or failure" Jun 3 13:16:08.832: INFO: Trying to get logs from node iruya-worker pod pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150 container test-container: STEP: delete the pod Jun 3 13:16:08.867: INFO: Waiting for pod pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150 to disappear Jun 3 13:16:08.874: INFO: Pod pod-5366de0d-0b7d-4f94-b75b-8d06b57b7150 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:16:08.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7367" for this suite. 
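The emptyDir test above mounts a volume on the node's default medium and verifies the mount point's permission bits (the conformance test expects world-writable 0777 for an emptyDir directory, if I read the test name correctly). A small sketch of checking a directory's mode the same way, using a temp directory as a stand-in for the pod's /test-volume mount:

```python
import os
import stat
import tempfile

def volume_mode(path):
    """Return the permission bits of `path` as a four-digit octal string."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

# Stand-in for the pod's /test-volume mount point on the default medium.
with tempfile.TemporaryDirectory() as mount:
    os.chmod(mount, 0o777)
    print(volume_mode(mount))  # 0777
```

`stat.S_IMODE` masks off the file-type bits, leaving only the rwx permission bits the test asserts on.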
Jun 3 13:16:14.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:16:14.965: INFO: namespace emptydir-7367 deletion completed in 6.08871591s • [SLOW TEST:10.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:16:14.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 3 13:16:15.109: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:16:24.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2213" for this suite. 
Jun 3 13:16:30.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:16:30.281: INFO: namespace init-container-2213 deletion completed in 6.09564951s • [SLOW TEST:15.315 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:16:30.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 13:16:30.357: INFO: Waiting up to 5m0s for pod "pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a" in namespace "emptydir-187" to be "success or failure" Jun 3 13:16:30.376: INFO: Pod "pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.52314ms Jun 3 13:16:32.379: INFO: Pod "pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022396486s Jun 3 13:16:34.386: INFO: Pod "pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028757417s STEP: Saw pod success Jun 3 13:16:34.386: INFO: Pod "pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a" satisfied condition "success or failure" Jun 3 13:16:34.390: INFO: Trying to get logs from node iruya-worker2 pod pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a container test-container: STEP: delete the pod Jun 3 13:16:34.557: INFO: Waiting for pod pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a to disappear Jun 3 13:16:34.611: INFO: Pod pod-3bda83c3-3b13-42f7-b2b2-4f6873da355a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:16:34.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-187" for this suite. Jun 3 13:16:40.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:16:40.720: INFO: namespace emptydir-187 deletion completed in 6.106041412s • [SLOW TEST:10.439 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:16:40.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] 
should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 3 13:16:40.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8467,SelfLink:/api/v1/namespaces/watch-8467/configmaps/e2e-watch-test-watch-closed,UID:6da02913-b5b6-4b50-bc4e-5cc09314b5f6,ResourceVersion:14441278,Generation:0,CreationTimestamp:2020-06-03 13:16:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 3 13:16:40.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8467,SelfLink:/api/v1/namespaces/watch-8467/configmaps/e2e-watch-test-watch-closed,UID:6da02913-b5b6-4b50-bc4e-5cc09314b5f6,ResourceVersion:14441279,Generation:0,CreationTimestamp:2020-06-03 13:16:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all 
changes to the configmap since the first watch closed Jun 3 13:16:40.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8467,SelfLink:/api/v1/namespaces/watch-8467/configmaps/e2e-watch-test-watch-closed,UID:6da02913-b5b6-4b50-bc4e-5cc09314b5f6,ResourceVersion:14441280,Generation:0,CreationTimestamp:2020-06-03 13:16:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 3 13:16:40.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8467,SelfLink:/api/v1/namespaces/watch-8467/configmaps/e2e-watch-test-watch-closed,UID:6da02913-b5b6-4b50-bc4e-5cc09314b5f6,ResourceVersion:14441281,Generation:0,CreationTimestamp:2020-06-03 13:16:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:16:40.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8467" for this suite. 
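The watch test above closes its first watch after two notifications, then opens a new watch at the last observed resourceVersion and expects to receive exactly the MODIFIED and DELETED events it missed while closed. A toy model of that replay semantics (the event shape here is illustrative, not the Kubernetes API's wire format):

```python
def resume_watch(events, last_seen):
    """Yield events strictly newer than `last_seen`, in resourceVersion order.

    Models re-establishing a watch at the last observed resourceVersion:
    the server replays every change the client missed while its first
    watch was closed.
    """
    for ev in sorted(events, key=lambda e: e["resourceVersion"]):
        if ev["resourceVersion"] > last_seen:
            yield ev

# Events mirroring the log: the first watch saw ADDED/MODIFIED, then closed.
stream = [
    {"type": "ADDED",    "resourceVersion": 14441278},
    {"type": "MODIFIED", "resourceVersion": 14441279},
    {"type": "MODIFIED", "resourceVersion": 14441280},  # happened while closed
    {"type": "DELETED",  "resourceVersion": 14441281},
]
missed = [ev["type"] for ev in resume_watch(stream, last_seen=14441279)]
print(missed)  # ['MODIFIED', 'DELETED']
```

The resourceVersions in the sketch are copied from the log (14441278–14441281); the test's second watch starts at 14441279 and so observes only the last two events.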
Jun 3 13:16:46.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:16:46.941: INFO: namespace watch-8467 deletion completed in 6.104010394s • [SLOW TEST:6.220 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:16:46.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jun 3 13:16:46.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7062' Jun 3 13:16:51.230: INFO: stderr: "" Jun 3 13:16:51.230: INFO: stdout: "pod/pause created\n" Jun 3 13:16:51.230: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 3 13:16:51.230: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7062" to be "running and ready" Jun 3 13:16:51.424: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 194.157721ms Jun 3 13:16:53.429: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198930915s Jun 3 13:16:55.433: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.203121058s Jun 3 13:16:55.433: INFO: Pod "pause" satisfied condition "running and ready" Jun 3 13:16:55.433: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jun 3 13:16:55.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7062' Jun 3 13:16:55.533: INFO: stderr: "" Jun 3 13:16:55.533: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 3 13:16:55.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7062' Jun 3 13:16:55.652: INFO: stderr: "" Jun 3 13:16:55.652: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 3 13:16:55.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7062' Jun 3 13:16:55.739: INFO: stderr: "" Jun 3 13:16:55.739: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 3 13:16:55.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7062' Jun 3 13:16:55.838: INFO: stderr: "" Jun 3 13:16:55.838: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jun 3 13:16:55.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7062' Jun 3 13:16:55.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 13:16:55.948: INFO: stdout: "pod \"pause\" force deleted\n" Jun 3 13:16:55.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7062' Jun 3 13:16:56.040: INFO: stderr: "No resources found.\n" Jun 3 13:16:56.040: INFO: stdout: "" Jun 3 13:16:56.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7062 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 13:16:56.133: INFO: stderr: "" Jun 3 13:16:56.133: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:16:56.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7062" for this suite. 
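The kubectl label steps above use the `key=value` form to add a label and the trailing-dash form (`testing-label-`) to remove it. A simplified sketch of those argument semantics (an illustrative helper, not kubectl's actual parser):

```python
def apply_label_args(labels, *args):
    """Apply `kubectl label`-style arguments to a label map.

    'key=value' sets a label; a bare trailing '-' (e.g. 'testing-label-')
    removes it.  Returns a new dict; parsing is deliberately minimal.
    """
    out = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            out.pop(arg[:-1], None)  # removal form: strip the dash, drop the key
        else:
            key, _, value = arg.partition("=")
            out[key] = value
    return out

labels = apply_label_args({}, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
print(apply_label_args(labels, "testing-label-"))  # {}
```

This matches the two `kubectl label pods pause ...` invocations in the log: the first adds testing-label, the second (with `testing-label-`) removes it, after which the `-L testing-label` column comes back empty.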
Jun 3 13:17:02.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:02.334: INFO: namespace kubectl-7062 deletion completed in 6.197724795s • [SLOW TEST:15.393 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:02.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:17:02.385: INFO: Creating deployment "test-recreate-deployment" Jun 3 13:17:02.394: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 3 13:17:02.406: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 3 13:17:04.413: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 3 13:17:04.415: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787022, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787022, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787022, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787022, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 13:17:06.419: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 3 13:17:06.428: INFO: Updating deployment test-recreate-deployment Jun 3 13:17:06.428: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 3 13:17:06.752: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8699,SelfLink:/apis/apps/v1/namespaces/deployment-8699/deployments/test-recreate-deployment,UID:75078b27-8730-47e1-9e8f-e4088465c8c6,ResourceVersion:14441407,Generation:2,CreationTimestamp:2020-06-03 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-03 13:17:06 +0000 UTC 2020-06-03 13:17:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-03 13:17:06 +0000 UTC 2020-06-03 13:17:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 3 13:17:06.787: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8699,SelfLink:/apis/apps/v1/namespaces/deployment-8699/replicasets/test-recreate-deployment-5c8c9cc69d,UID:18f216e2-74fa-40b4-928f-d8225541f20d,ResourceVersion:14441406,Generation:1,CreationTimestamp:2020-06-03 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 75078b27-8730-47e1-9e8f-e4088465c8c6 0xc00299b7f7 0xc00299b7f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 13:17:06.787: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 3 13:17:06.787: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8699,SelfLink:/apis/apps/v1/namespaces/deployment-8699/replicasets/test-recreate-deployment-6df85df6b9,UID:807de95f-6bb3-41b0-9fc5-eaa8e56a3cf5,ResourceVersion:14441397,Generation:2,CreationTimestamp:2020-06-03 13:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 75078b27-8730-47e1-9e8f-e4088465c8c6 0xc00299b8c7 0xc00299b8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 13:17:06.846: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kd9r6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kd9r6,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8699,SelfLink:/api/v1/namespaces/deployment-8699/pods/test-recreate-deployment-5c8c9cc69d-kd9r6,UID:1f754ea9-6436-44db-a6b8-baf0bafeb795,ResourceVersion:14441409,Generation:0,CreationTimestamp:2020-06-03 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 18f216e2-74fa-40b4-928f-d8225541f20d 0xc0030d8197 0xc0030d8198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fjg96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fjg96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fjg96 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030d8210} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030d8230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:17:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:17:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:17:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-03 13:17:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:17:06.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8699" for this suite. 
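The log above covers the "RecreateDeployment should delete old pods and create new ones" test: a Deployment with `strategy.type: Recreate` scales the old ReplicaSet (redis) to 0 before the new ReplicaSet (nginx) comes up. As an editorial illustration only — this is a hypothetical sketch, not the test's actual manifest — such a Deployment looks roughly like:

```yaml
# Hypothetical Deployment using the Recreate strategy, analogous to
# "test-recreate-deployment" in the log: all old pods are deleted
# before any new pods are created (no rolling update).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo            # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate               # delete old ReplicaSet's pods first
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # image used by the test's new ReplicaSet
```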
Jun 3 13:17:12.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:13.043: INFO: namespace deployment-8699 deletion completed in 6.163639623s • [SLOW TEST:10.708 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:13.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 13:17:13.147: INFO: Waiting up to 5m0s for pod "pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97" in namespace "emptydir-4076" to be "success or failure" Jun 3 13:17:13.151: INFO: Pod "pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798934ms Jun 3 13:17:15.157: INFO: Pod "pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010245831s Jun 3 13:17:17.163: INFO: Pod "pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016233395s STEP: Saw pod success Jun 3 13:17:17.163: INFO: Pod "pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97" satisfied condition "success or failure" Jun 3 13:17:17.166: INFO: Trying to get logs from node iruya-worker pod pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97 container test-container: STEP: delete the pod Jun 3 13:17:17.182: INFO: Waiting for pod pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97 to disappear Jun 3 13:17:17.226: INFO: Pod pod-7de2b18a-8202-492f-a0eb-ddfcdf5dda97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:17:17.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4076" for this suite. Jun 3 13:17:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:23.343: INFO: namespace emptydir-4076 deletion completed in 6.113638053s • [SLOW TEST:10.300 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:23.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able 
to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 3 13:17:23.411: INFO: Waiting up to 5m0s for pod "client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9" in namespace "containers-838" to be "success or failure" Jun 3 13:17:23.415: INFO: Pod "client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365123ms Jun 3 13:17:25.467: INFO: Pod "client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055513384s Jun 3 13:17:27.496: INFO: Pod "client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084393806s STEP: Saw pod success Jun 3 13:17:27.496: INFO: Pod "client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9" satisfied condition "success or failure" Jun 3 13:17:27.498: INFO: Trying to get logs from node iruya-worker2 pod client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9 container test-container: STEP: delete the pod Jun 3 13:17:27.515: INFO: Waiting for pod client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9 to disappear Jun 3 13:17:27.526: INFO: Pod client-containers-52e22b19-1a56-431c-a165-634ccdf2b5d9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:17:27.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-838" for this suite. 
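The Docker Containers test above verifies that a pod spec can override both the image's default entrypoint and its default arguments. A minimal sketch of such a pod (hypothetical names; the test's real pod spec is not shown in the log):

```yaml
# Hypothetical pod overriding the image's default ENTRYPOINT and CMD:
# `command` replaces the entrypoint, `args` replaces the default arguments.
apiVersion: v1
kind: Pod
metadata:
  name: override-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image for illustration
    command: ["/bin/echo"]       # overrides the image entrypoint
    args: ["override", "arguments"]
```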
Jun 3 13:17:33.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:33.638: INFO: namespace containers-838 deletion completed in 6.110021089s • [SLOW TEST:10.294 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:33.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 3 13:17:33.715: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Jun 3 13:17:34.151: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 3 13:17:36.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 13:17:38.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726787054, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 13:17:41.190: INFO: Waited 723.297193ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:17:41.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4316" for this suite. Jun 3 13:17:47.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:48.032: INFO: namespace aggregator-4316 deletion completed in 6.405116871s • [SLOW TEST:14.394 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:48.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating secret with name secret-test-bb22fd8d-d41a-4024-8965-61b27b4b05bf STEP: Creating a pod to test consume secrets Jun 3 13:17:48.139: INFO: Waiting up to 5m0s for pod "pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3" in namespace "secrets-8933" to be "success or failure" Jun 3 13:17:48.150: INFO: Pod "pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653689ms Jun 3 13:17:50.154: INFO: Pod "pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014635799s Jun 3 13:17:52.158: INFO: Pod "pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018843617s STEP: Saw pod success Jun 3 13:17:52.158: INFO: Pod "pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3" satisfied condition "success or failure" Jun 3 13:17:52.161: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3 container secret-env-test: STEP: delete the pod Jun 3 13:17:52.197: INFO: Waiting for pod pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3 to disappear Jun 3 13:17:52.210: INFO: Pod pod-secrets-87574f19-a03f-4dd4-b32d-8d76b9a8b0c3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:17:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8933" for this suite. 
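The Secrets test above creates a Secret and consumes it from a pod through environment variables. A hedged sketch of the pattern, assuming a pre-existing Secret (all names here are hypothetical, not taken from the test):

```yaml
# Hypothetical pod consuming a Secret key as an environment variable,
# the mechanism exercised by "should be consumable from pods in env vars".
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29   # assumed image for illustration
    command: ["sh", "-c", "env"]            # print env so the value is visible in logs
    env:
    - name: SECRET_DATA          # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: my-secret        # hypothetical Secret name
          key: data-1            # hypothetical key within the Secret
```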
Jun 3 13:17:58.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:17:58.300: INFO: namespace secrets-8933 deletion completed in 6.086593539s • [SLOW TEST:10.268 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:17:58.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jun 3 13:17:58.398: INFO: Waiting up to 5m0s for pod "var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8" in namespace "var-expansion-3290" to be "success or failure" Jun 3 13:17:58.402: INFO: Pod "var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075196ms Jun 3 13:18:00.406: INFO: Pod "var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008275585s Jun 3 13:18:02.411: INFO: Pod "var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012896238s STEP: Saw pod success Jun 3 13:18:02.411: INFO: Pod "var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8" satisfied condition "success or failure" Jun 3 13:18:02.414: INFO: Trying to get logs from node iruya-worker pod var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8 container dapi-container: STEP: delete the pod Jun 3 13:18:02.451: INFO: Waiting for pod var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8 to disappear Jun 3 13:18:02.453: INFO: Pod var-expansion-f9268221-e812-4d9f-a143-08a83b6090d8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:18:02.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3290" for this suite. Jun 3 13:18:08.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:18:08.547: INFO: namespace var-expansion-3290 deletion completed in 6.09042226s • [SLOW TEST:10.246 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:18:08.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:18:08.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 3 13:18:08.742: INFO: stderr: "" Jun 3 13:18:08.742: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:18:08.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8677" for this suite. 
Jun 3 13:18:14.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:18:14.828: INFO: namespace kubectl-8677 deletion completed in 6.081348719s • [SLOW TEST:6.281 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:18:14.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 3 13:18:14.871: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:18:22.425: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "init-container-4696" for this suite. Jun 3 13:18:28.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:18:28.569: INFO: namespace init-container-4696 deletion completed in 6.100722613s • [SLOW TEST:13.740 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:18:28.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0603 13:19:08.852635 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 3 13:19:08.852: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:19:08.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8959" for this suite. 
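The garbage-collector test above deletes a ReplicationController with delete options that orphan its pods, then waits 30 seconds to confirm the collector does not remove them. As a hedged illustration (not the test's actual request), the orphaning delete is expressed through a `DeleteOptions` body:

```yaml
# Hypothetical DeleteOptions body for deleting an owner object while
# orphaning its dependents: propagationPolicy Orphan tells the garbage
# collector to strip owner references instead of deleting the pods.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With the kubectl of this era (v1.15), the rough equivalent is `kubectl delete rc <name> --cascade=false`; newer kubectl versions spell it `--cascade=orphan`.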
Jun 3 13:19:18.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:19:18.951: INFO: namespace gc-8959 deletion completed in 10.095426109s • [SLOW TEST:50.382 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:19:18.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 3 13:19:19.015: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 13:19:19.031: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 13:19:19.034: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jun 3 13:19:19.039: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.039: INFO: Container kube-proxy ready: true, restart count 0
Jun 3 13:19:19.039: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.039: INFO: Container kindnet-cni ready: true, restart count 2
Jun 3 13:19:19.039: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jun 3 13:19:19.044: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.044: INFO: Container kube-proxy ready: true, restart count 0
Jun 3 13:19:19.044: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.044: INFO: Container kindnet-cni ready: true, restart count 2
Jun 3 13:19:19.044: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.044: INFO: Container coredns ready: true, restart count 0
Jun 3 13:19:19.044: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Jun 3 13:19:19.044: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16150b3379b71147], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
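The FailedScheduling event above comes from scheduling a pod whose `nodeSelector` no node satisfies. A hedged sketch of such a pod (the pod name is taken from the event; the selector label and image are assumptions for illustration):

```yaml
# Hypothetical pod with a nodeSelector that matches no node; the
# scheduler rejects it with a FailedScheduling event like the one logged.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod           # name as seen in the event above
spec:
  nodeSelector:
    some-nonexistent-label: "true"   # assumed: no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1      # assumed image for illustration
```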
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:19:20.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4179" for this suite. Jun 3 13:19:26.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:19:26.193: INFO: namespace sched-pred-4179 deletion completed in 6.125640574s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.242 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:19:26.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 3 13:19:26.274: INFO: Waiting up to 5m0s for pod "pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78" in namespace "emptydir-36" to 
be "success or failure" Jun 3 13:19:26.286: INFO: Pod "pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78": Phase="Pending", Reason="", readiness=false. Elapsed: 12.094891ms Jun 3 13:19:28.372: INFO: Pod "pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098617973s Jun 3 13:19:30.402: INFO: Pod "pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128836056s STEP: Saw pod success Jun 3 13:19:30.402: INFO: Pod "pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78" satisfied condition "success or failure" Jun 3 13:19:30.405: INFO: Trying to get logs from node iruya-worker pod pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78 container test-container: STEP: delete the pod Jun 3 13:19:30.531: INFO: Waiting for pod pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78 to disappear Jun 3 13:19:30.574: INFO: Pod pod-c34f5aff-45aa-4efc-b3fd-b424676d8c78 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:19:30.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-36" for this suite. 
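The emptydir 0777-on-tmpfs test above creates a short-lived pod that mounts a memory-backed emptyDir and checks the mount's permission bits, then waits for the pod to reach Succeeded ("success or failure" condition). A hedged sketch of an equivalent pod — image, command, and paths are assumptions, only the pattern matches the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-check     # hypothetical name; the real test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the permission bits of the mount point, then exit 0
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
```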
Jun 3 13:19:36.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:19:36.698: INFO: namespace emptydir-36 deletion completed in 6.120291706s • [SLOW TEST:10.505 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:19:36.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 3 13:19:37.531: INFO: Pod name wrapped-volume-race-bea2335a-6d5f-4b40-a4e7-7efc50d2ac27: Found 0 pods out of 5 Jun 3 13:19:42.539: INFO: Pod name wrapped-volume-race-bea2335a-6d5f-4b40-a4e7-7efc50d2ac27: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bea2335a-6d5f-4b40-a4e7-7efc50d2ac27 in namespace emptydir-wrapper-7449, will wait for the garbage collector to delete the pods Jun 3 13:19:56.758: INFO: Deleting ReplicationController 
wrapped-volume-race-bea2335a-6d5f-4b40-a4e7-7efc50d2ac27 took: 7.662115ms Jun 3 13:19:57.058: INFO: Terminating ReplicationController wrapped-volume-race-bea2335a-6d5f-4b40-a4e7-7efc50d2ac27 pods took: 300.264594ms STEP: Creating RC which spawns configmap-volume pods Jun 3 13:20:43.288: INFO: Pod name wrapped-volume-race-78d28460-e534-4803-b611-b62104d88169: Found 0 pods out of 5 Jun 3 13:20:48.295: INFO: Pod name wrapped-volume-race-78d28460-e534-4803-b611-b62104d88169: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-78d28460-e534-4803-b611-b62104d88169 in namespace emptydir-wrapper-7449, will wait for the garbage collector to delete the pods Jun 3 13:21:02.414: INFO: Deleting ReplicationController wrapped-volume-race-78d28460-e534-4803-b611-b62104d88169 took: 8.285548ms Jun 3 13:21:02.714: INFO: Terminating ReplicationController wrapped-volume-race-78d28460-e534-4803-b611-b62104d88169 pods took: 300.27166ms STEP: Creating RC which spawns configmap-volume pods Jun 3 13:21:43.266: INFO: Pod name wrapped-volume-race-08919339-19c1-431e-8051-52a6270f9694: Found 0 pods out of 5 Jun 3 13:21:48.273: INFO: Pod name wrapped-volume-race-08919339-19c1-431e-8051-52a6270f9694: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-08919339-19c1-431e-8051-52a6270f9694 in namespace emptydir-wrapper-7449, will wait for the garbage collector to delete the pods Jun 3 13:22:02.448: INFO: Deleting ReplicationController wrapped-volume-race-08919339-19c1-431e-8051-52a6270f9694 took: 8.105772ms Jun 3 13:22:02.749: INFO: Terminating ReplicationController wrapped-volume-race-08919339-19c1-431e-8051-52a6270f9694 pods took: 300.674543ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:22:43.018: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7449" for this suite. Jun 3 13:22:51.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:22:51.146: INFO: namespace emptydir-wrapper-7449 deletion completed in 8.122993705s • [SLOW TEST:194.447 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:22:51.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 3 13:22:51.222: INFO: Waiting up to 5m0s for pod "pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8" in namespace "emptydir-5302" to be "success or failure" Jun 3 13:22:51.239: INFO: Pod "pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.505838ms Jun 3 13:22:53.243: INFO: Pod "pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020374701s Jun 3 13:22:55.247: INFO: Pod "pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024511779s STEP: Saw pod success Jun 3 13:22:55.247: INFO: Pod "pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8" satisfied condition "success or failure" Jun 3 13:22:55.250: INFO: Trying to get logs from node iruya-worker2 pod pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8 container test-container: STEP: delete the pod Jun 3 13:22:55.395: INFO: Waiting for pod pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8 to disappear Jun 3 13:22:55.407: INFO: Pod pod-786fc8e5-1f6c-4c97-baf4-3ca98022c3a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:22:55.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5302" for this suite. Jun 3 13:23:01.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:23:01.501: INFO: namespace emptydir-5302 deletion completed in 6.090723091s • [SLOW TEST:10.355 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:23:01.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 13:23:01.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9085' Jun 3 13:23:01.719: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 3 13:23:01.719: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 3 13:23:01.802: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-w77tl] Jun 3 13:23:01.802: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-w77tl" in namespace "kubectl-9085" to be "running and ready" Jun 3 13:23:01.804: INFO: Pod "e2e-test-nginx-rc-w77tl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519752ms Jun 3 13:23:03.809: INFO: Pod "e2e-test-nginx-rc-w77tl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007401764s Jun 3 13:23:05.814: INFO: Pod "e2e-test-nginx-rc-w77tl": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.01164162s Jun 3 13:23:05.814: INFO: Pod "e2e-test-nginx-rc-w77tl" satisfied condition "running and ready" Jun 3 13:23:05.814: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-w77tl] Jun 3 13:23:05.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9085' Jun 3 13:23:05.940: INFO: stderr: "" Jun 3 13:23:05.940: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jun 3 13:23:05.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9085' Jun 3 13:23:06.045: INFO: stderr: "" Jun 3 13:23:06.045: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:23:06.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9085" for this suite. 
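The transcript above shows the deprecated `--generator=run/v1` form (which still creates a ReplicationController, with a warning). The commands the test runs, extracted from the log for readability:

```sh
# deprecated form used by the test; kubectl warns but creates an RC
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=kubectl-9085

# fetch logs through the controller, then clean up, as the test does
kubectl logs rc/e2e-test-nginx-rc --namespace=kubectl-9085
kubectl delete rc e2e-test-nginx-rc --namespace=kubectl-9085
```

Per the warning, newer clusters should use `--generator=run-pod/v1` (a bare pod) or `kubectl create` for controller objects.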
Jun 3 13:23:12.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:23:12.148: INFO: namespace kubectl-9085 deletion completed in 6.094070045s • [SLOW TEST:10.646 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:23:12.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b6a0d305-01ae-4d8c-b791-7fea6314f0c4 STEP: Creating a pod to test consume configMaps Jun 3 13:23:12.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d" in namespace "configmap-2951" to be "success or failure" Jun 3 13:23:12.287: INFO: Pod "pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.261108ms Jun 3 13:23:14.291: INFO: Pod "pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016327954s Jun 3 13:23:16.295: INFO: Pod "pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020797298s STEP: Saw pod success Jun 3 13:23:16.295: INFO: Pod "pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d" satisfied condition "success or failure" Jun 3 13:23:16.299: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d container configmap-volume-test: STEP: delete the pod Jun 3 13:23:16.506: INFO: Waiting for pod pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d to disappear Jun 3 13:23:16.634: INFO: Pod pod-configmaps-7422b72f-0533-4eb1-beca-fbc0fbdfe34d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:23:16.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2951" for this suite. 
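The "volume with mappings" test above projects individual ConfigMap keys into a volume under remapped paths via `items`. A sketch under assumed names — the log only shows the generated ConfigMap and pod names, so the key, path, and mount point here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test   # container name from the log
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # hypothetical; the log shows a UUID-suffixed name
      items:
      - key: data-1                     # assumed key in the ConfigMap
        path: path/to/data              # remapped path inside the volume
```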
Jun 3 13:23:22.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:23:22.748: INFO: namespace configmap-2951 deletion completed in 6.109838274s • [SLOW TEST:10.600 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:23:22.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jun 3 13:23:22.841: INFO: Waiting up to 5m0s for pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d" in namespace "containers-3190" to be "success or failure" Jun 3 13:23:22.866: INFO: Pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.205875ms Jun 3 13:23:25.240: INFO: Pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.399553032s Jun 3 13:23:27.245: INFO: Pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404177161s Jun 3 13:23:29.250: INFO: Pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.409428397s STEP: Saw pod success Jun 3 13:23:29.250: INFO: Pod "client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d" satisfied condition "success or failure" Jun 3 13:23:29.254: INFO: Trying to get logs from node iruya-worker2 pod client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d container test-container: STEP: delete the pod Jun 3 13:23:29.277: INFO: Waiting for pod client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d to disappear Jun 3 13:23:29.281: INFO: Pod client-containers-54ec31aa-3707-43ae-a96e-2db703cbca9d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:23:29.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3190" for this suite. 
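Overriding an image's default command (its Docker ENTRYPOINT) is done with the container's `command` field; `args` would override CMD instead. A sketch of the kind of pod this test creates — image and command text are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-command-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container           # container name from the log
    image: busybox
    # `command` replaces the image's ENTRYPOINT entirely
    command: ["echo", "entrypoint overridden"]
```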
Jun 3 13:23:35.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:23:35.381: INFO: namespace containers-3190 deletion completed in 6.096577163s • [SLOW TEST:12.632 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:23:35.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:23:35.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7443" for this suite. 
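The "verifying QOS class is set" step above checks `status.qosClass`, which the API server derives from the resource spec: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits yields BestEffort. A sketch of a Guaranteed pod — the resource values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example   # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:                    # equal to requests => status.qosClass: Guaranteed
        cpu: 100m
        memory: 64Mi
```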
Jun 3 13:23:57.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:23:57.604: INFO: namespace pods-7443 deletion completed in 22.114573827s • [SLOW TEST:22.222 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:23:57.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4fb8fb0d-f5bb-476f-abee-41a19173e142 STEP: Creating a pod to test consume secrets Jun 3 13:23:57.677: INFO: Waiting up to 5m0s for pod "pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1" in namespace "secrets-3525" to be "success or failure" Jun 3 13:23:57.683: INFO: Pod "pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.821152ms Jun 3 13:23:59.747: INFO: Pod "pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06944195s Jun 3 13:24:01.752: INFO: Pod "pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07451429s STEP: Saw pod success Jun 3 13:24:01.752: INFO: Pod "pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1" satisfied condition "success or failure" Jun 3 13:24:01.756: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1 container secret-volume-test: STEP: delete the pod Jun 3 13:24:01.798: INFO: Waiting for pod pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1 to disappear Jun 3 13:24:01.824: INFO: Pod pod-secrets-7772e83f-c080-4995-acb2-d7a966b4b9a1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:24:01.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3525" for this suite. 
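The defaultMode test above mounts a Secret as a volume with a non-default file mode and has the container verify the resulting permissions. A hedged sketch — the secret key, mode value, and paths are assumptions; the log only gives generated names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test      # container name from the log
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # hypothetical; the log shows a UUID-suffixed name
      defaultMode: 0400                 # assumed mode; the test sets a non-default value
```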
Jun 3 13:24:07.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:24:07.921: INFO: namespace secrets-3525 deletion completed in 6.092227546s • [SLOW TEST:10.317 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:24:07.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7554.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 13:24:14.051: INFO: DNS probes using dns-7554/dns-test-c17f98fe-c330-4ebf-984a-d55da75318b6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:24:14.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7554" for this suite. 
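The probe scripts above are double-escaped for the pod template (`$$` becomes `$` inside the container). Unescaped, one pass of the UDP cluster-DNS check reads roughly as follows; the names and result paths are taken from the log:

```sh
# one iteration of the cluster-DNS probe, with template escaping removed
check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local
```

The test then reads the `/results` files from the probe pod to confirm each expected name resolved over both UDP and TCP.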
Jun 3 13:24:20.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:24:20.273: INFO: namespace dns-7554 deletion completed in 6.155394972s • [SLOW TEST:12.351 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:24:20.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:24:24.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9825" for this suite. 
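The hostAliases test above relies on the kubelet appending `spec.hostAliases` entries to the pod's `/etc/hosts`. A sketch of such a pod — the IP and hostnames are illustrative assumptions, not values from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example     # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"          # assumed entry; written into the pod's /etc/hosts
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
```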
Jun 3 13:25:04.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:25:04.453: INFO: namespace kubelet-test-9825 deletion completed in 40.075289635s
• [SLOW TEST:44.180 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:25:04.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 3 13:25:04.517: INFO: Waiting up to 5m0s for pod "pod-e2760e59-ba28-4443-a9f0-e1e2862812f2" in namespace "emptydir-706" to be "success or failure"
Jun 3 13:25:04.531: INFO: Pod "pod-e2760e59-ba28-4443-a9f0-e1e2862812f2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.968102ms
Jun 3 13:25:06.535: INFO: Pod "pod-e2760e59-ba28-4443-a9f0-e1e2862812f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017621579s
Jun 3 13:25:08.540: INFO: Pod "pod-e2760e59-ba28-4443-a9f0-e1e2862812f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022546915s
STEP: Saw pod success
Jun 3 13:25:08.540: INFO: Pod "pod-e2760e59-ba28-4443-a9f0-e1e2862812f2" satisfied condition "success or failure"
Jun 3 13:25:08.543: INFO: Trying to get logs from node iruya-worker2 pod pod-e2760e59-ba28-4443-a9f0-e1e2862812f2 container test-container:
STEP: delete the pod
Jun 3 13:25:08.831: INFO: Waiting for pod pod-e2760e59-ba28-4443-a9f0-e1e2862812f2 to disappear
Jun 3 13:25:09.194: INFO: Pod pod-e2760e59-ba28-4443-a9f0-e1e2862812f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:25:09.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-706" for this suite.
Jun 3 13:25:15.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:25:15.338: INFO: namespace emptydir-706 deletion completed in 6.140604809s
• [SLOW TEST:10.885 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:25:15.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jun 3 13:25:15.439: INFO: Pod name pod-release: Found 0 pods out of 1
Jun 3 13:25:20.444: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:25:21.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3871" for this suite.
Jun 3 13:25:27.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:25:27.847: INFO: namespace replication-controller-3871 deletion completed in 6.115201334s
• [SLOW TEST:12.508 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:25:27.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3273
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3273
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3273
Jun 3 13:25:30.215: INFO: Found 0 stateful pods, waiting for 1
Jun 3 13:25:40.220: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 3 13:25:40.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 3 13:25:40.525: INFO: stderr: "I0603 13:25:40.348224 702 log.go:172] (0xc00069e420) (0xc0003b06e0) Create stream\nI0603 13:25:40.348278 702 log.go:172] (0xc00069e420) (0xc0003b06e0) Stream added, broadcasting: 1\nI0603 13:25:40.350968 702 log.go:172] (0xc00069e420) Reply frame received for 1\nI0603 13:25:40.351109 702 log.go:172] (0xc00069e420) (0xc0003b0780) Create stream\nI0603 13:25:40.351206 702 log.go:172] (0xc00069e420) (0xc0003b0780) Stream added, broadcasting: 3\nI0603 13:25:40.352919 702 log.go:172] (0xc00069e420) Reply frame received for 3\nI0603 13:25:40.352969 702 log.go:172] (0xc00069e420) (0xc0003b0000) Create stream\nI0603 13:25:40.352991 702 log.go:172] (0xc00069e420) (0xc0003b0000) Stream added, broadcasting: 5\nI0603 13:25:40.354060 702 log.go:172] (0xc00069e420) Reply frame received for 5\nI0603 13:25:40.459978 702 log.go:172] (0xc00069e420) Data frame received for 5\nI0603 13:25:40.460002 702 log.go:172] (0xc0003b0000) (5) Data frame handling\nI0603 13:25:40.460012 702 log.go:172] (0xc0003b0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 13:25:40.515380 702 log.go:172] (0xc00069e420) Data frame received for 3\nI0603 13:25:40.515420 702 log.go:172] (0xc0003b0780) (3) Data frame handling\nI0603 13:25:40.515441 702 log.go:172] (0xc0003b0780) (3) Data frame sent\nI0603 13:25:40.515688 702 log.go:172] (0xc00069e420) Data frame received for 5\nI0603 13:25:40.515731 702 log.go:172] (0xc0003b0000) (5) Data frame handling\nI0603 13:25:40.515770 702 log.go:172] (0xc00069e420) Data frame received for 3\nI0603 13:25:40.515797 702 log.go:172] (0xc0003b0780) (3) Data frame handling\nI0603 13:25:40.518371 702 log.go:172] (0xc00069e420) Data frame received for 1\nI0603 13:25:40.518418 702 log.go:172] (0xc0003b06e0) (1) Data frame handling\nI0603 13:25:40.518464 702 log.go:172] (0xc0003b06e0) (1) Data frame sent\nI0603 13:25:40.518508 702 log.go:172] (0xc00069e420) (0xc0003b06e0) Stream removed, broadcasting: 1\nI0603 13:25:40.518568 702 log.go:172] (0xc00069e420) Go away received\nI0603 13:25:40.518950 702 log.go:172] (0xc00069e420) (0xc0003b06e0) Stream removed, broadcasting: 1\nI0603 13:25:40.518971 702 log.go:172] (0xc00069e420) (0xc0003b0780) Stream removed, broadcasting: 3\nI0603 13:25:40.518982 702 log.go:172] (0xc00069e420) (0xc0003b0000) Stream removed, broadcasting: 5\n"
Jun 3 13:25:40.525: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 3 13:25:40.525: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 3 13:25:40.528: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 3 13:25:50.532: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 3 13:25:50.532: INFO: Waiting for statefulset status.replicas updated to 0
Jun 3 13:25:50.552: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:25:50.552: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:25:50.552: INFO:
Jun 3 13:25:50.552: INFO: StatefulSet ss has not reached scale 3, at 1
Jun 3 13:25:51.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995333339s
Jun 3 13:25:52.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991219292s
Jun 3 13:25:53.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985966136s
Jun 3 13:25:54.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982181495s
Jun 3 13:25:55.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976591762s
Jun 3 13:25:56.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972390524s
Jun 3 13:25:57.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.967361927s
Jun 3 13:25:58.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.964079207s
Jun 3 13:25:59.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 958.961857ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3273
Jun 3 13:26:00.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/share/nginx/html/ || true' Jun 3 13:26:00.833: INFO: stderr: "I0603 13:26:00.734968 724 log.go:172] (0xc00028e580) (0xc000a4c960) Create stream\nI0603 13:26:00.735055 724 log.go:172] (0xc00028e580) (0xc000a4c960) Stream added, broadcasting: 1\nI0603 13:26:00.740046 724 log.go:172] (0xc00028e580) Reply frame received for 1\nI0603 13:26:00.740097 724 log.go:172] (0xc00028e580) (0xc0009f0000) Create stream\nI0603 13:26:00.740120 724 log.go:172] (0xc00028e580) (0xc0009f0000) Stream added, broadcasting: 3\nI0603 13:26:00.741693 724 log.go:172] (0xc00028e580) Reply frame received for 3\nI0603 13:26:00.741759 724 log.go:172] (0xc00028e580) (0xc0009d0000) Create stream\nI0603 13:26:00.741799 724 log.go:172] (0xc00028e580) (0xc0009d0000) Stream added, broadcasting: 5\nI0603 13:26:00.742794 724 log.go:172] (0xc00028e580) Reply frame received for 5\nI0603 13:26:00.826172 724 log.go:172] (0xc00028e580) Data frame received for 3\nI0603 13:26:00.826212 724 log.go:172] (0xc00028e580) Data frame received for 5\nI0603 13:26:00.826247 724 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0603 13:26:00.826265 724 log.go:172] (0xc0009d0000) (5) Data frame sent\nI0603 13:26:00.826276 724 log.go:172] (0xc00028e580) Data frame received for 5\nI0603 13:26:00.826286 724 log.go:172] (0xc0009d0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 13:26:00.826317 724 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0603 13:26:00.826340 724 log.go:172] (0xc0009f0000) (3) Data frame sent\nI0603 13:26:00.826355 724 log.go:172] (0xc00028e580) Data frame received for 3\nI0603 13:26:00.826375 724 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0603 13:26:00.827903 724 log.go:172] (0xc00028e580) Data frame received for 1\nI0603 13:26:00.827926 724 log.go:172] (0xc000a4c960) (1) Data frame handling\nI0603 13:26:00.827954 724 log.go:172] (0xc000a4c960) (1) Data frame sent\nI0603 13:26:00.827990 724 log.go:172] (0xc00028e580) (0xc000a4c960) Stream removed, 
broadcasting: 1\nI0603 13:26:00.828011 724 log.go:172] (0xc00028e580) Go away received\nI0603 13:26:00.828433 724 log.go:172] (0xc00028e580) (0xc000a4c960) Stream removed, broadcasting: 1\nI0603 13:26:00.828457 724 log.go:172] (0xc00028e580) (0xc0009f0000) Stream removed, broadcasting: 3\nI0603 13:26:00.828470 724 log.go:172] (0xc00028e580) (0xc0009d0000) Stream removed, broadcasting: 5\n" Jun 3 13:26:00.833: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 13:26:00.833: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 13:26:00.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:26:01.054: INFO: stderr: "I0603 13:26:00.959353 745 log.go:172] (0xc00070aa50) (0xc0002fc640) Create stream\nI0603 13:26:00.959405 745 log.go:172] (0xc00070aa50) (0xc0002fc640) Stream added, broadcasting: 1\nI0603 13:26:00.962273 745 log.go:172] (0xc00070aa50) Reply frame received for 1\nI0603 13:26:00.962308 745 log.go:172] (0xc00070aa50) (0xc0002041e0) Create stream\nI0603 13:26:00.962318 745 log.go:172] (0xc00070aa50) (0xc0002041e0) Stream added, broadcasting: 3\nI0603 13:26:00.963512 745 log.go:172] (0xc00070aa50) Reply frame received for 3\nI0603 13:26:00.963580 745 log.go:172] (0xc00070aa50) (0xc000204280) Create stream\nI0603 13:26:00.963598 745 log.go:172] (0xc00070aa50) (0xc000204280) Stream added, broadcasting: 5\nI0603 13:26:00.964789 745 log.go:172] (0xc00070aa50) Reply frame received for 5\nI0603 13:26:01.043878 745 log.go:172] (0xc00070aa50) Data frame received for 5\nI0603 13:26:01.043912 745 log.go:172] (0xc000204280) (5) Data frame handling\nI0603 13:26:01.043933 745 log.go:172] (0xc000204280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 13:26:01.045469 745 log.go:172] (0xc00070aa50) 
Data frame received for 3\nI0603 13:26:01.045499 745 log.go:172] (0xc0002041e0) (3) Data frame handling\nI0603 13:26:01.045516 745 log.go:172] (0xc0002041e0) (3) Data frame sent\nI0603 13:26:01.045628 745 log.go:172] (0xc00070aa50) Data frame received for 5\nI0603 13:26:01.045662 745 log.go:172] (0xc000204280) (5) Data frame handling\nI0603 13:26:01.045693 745 log.go:172] (0xc000204280) (5) Data frame sent\nI0603 13:26:01.045716 745 log.go:172] (0xc00070aa50) Data frame received for 5\nI0603 13:26:01.045724 745 log.go:172] (0xc000204280) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0603 13:26:01.045745 745 log.go:172] (0xc000204280) (5) Data frame sent\nI0603 13:26:01.045990 745 log.go:172] (0xc00070aa50) Data frame received for 3\nI0603 13:26:01.046006 745 log.go:172] (0xc0002041e0) (3) Data frame handling\nI0603 13:26:01.046028 745 log.go:172] (0xc00070aa50) Data frame received for 5\nI0603 13:26:01.046054 745 log.go:172] (0xc000204280) (5) Data frame handling\nI0603 13:26:01.047780 745 log.go:172] (0xc00070aa50) Data frame received for 1\nI0603 13:26:01.047810 745 log.go:172] (0xc0002fc640) (1) Data frame handling\nI0603 13:26:01.047846 745 log.go:172] (0xc0002fc640) (1) Data frame sent\nI0603 13:26:01.047884 745 log.go:172] (0xc00070aa50) (0xc0002fc640) Stream removed, broadcasting: 1\nI0603 13:26:01.048020 745 log.go:172] (0xc00070aa50) Go away received\nI0603 13:26:01.048411 745 log.go:172] (0xc00070aa50) (0xc0002fc640) Stream removed, broadcasting: 1\nI0603 13:26:01.048433 745 log.go:172] (0xc00070aa50) (0xc0002041e0) Stream removed, broadcasting: 3\nI0603 13:26:01.048444 745 log.go:172] (0xc00070aa50) (0xc000204280) Stream removed, broadcasting: 5\n" Jun 3 13:26:01.054: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 13:26:01.054: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 
13:26:01.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:26:01.269: INFO: stderr: "I0603 13:26:01.181103 766 log.go:172] (0xc0001166e0) (0xc000370820) Create stream\nI0603 13:26:01.181397 766 log.go:172] (0xc0001166e0) (0xc000370820) Stream added, broadcasting: 1\nI0603 13:26:01.184333 766 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0603 13:26:01.184436 766 log.go:172] (0xc0001166e0) (0xc000608000) Create stream\nI0603 13:26:01.184473 766 log.go:172] (0xc0001166e0) (0xc000608000) Stream added, broadcasting: 3\nI0603 13:26:01.186397 766 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0603 13:26:01.186450 766 log.go:172] (0xc0001166e0) (0xc000370000) Create stream\nI0603 13:26:01.186462 766 log.go:172] (0xc0001166e0) (0xc000370000) Stream added, broadcasting: 5\nI0603 13:26:01.187679 766 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0603 13:26:01.263445 766 log.go:172] (0xc0001166e0) Data frame received for 3\nI0603 13:26:01.263490 766 log.go:172] (0xc000608000) (3) Data frame handling\nI0603 13:26:01.263503 766 log.go:172] (0xc000608000) (3) Data frame sent\nI0603 13:26:01.263512 766 log.go:172] (0xc0001166e0) Data frame received for 3\nI0603 13:26:01.263521 766 log.go:172] (0xc000608000) (3) Data frame handling\nI0603 13:26:01.263573 766 log.go:172] (0xc0001166e0) Data frame received for 5\nI0603 13:26:01.263592 766 log.go:172] (0xc000370000) (5) Data frame handling\nI0603 13:26:01.263611 766 log.go:172] (0xc000370000) (5) Data frame sent\nI0603 13:26:01.263627 766 log.go:172] (0xc0001166e0) Data frame received for 5\nI0603 13:26:01.263635 766 log.go:172] (0xc000370000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0603 13:26:01.265053 766 log.go:172] (0xc0001166e0) Data frame received for 
1\nI0603 13:26:01.265073 766 log.go:172] (0xc000370820) (1) Data frame handling\nI0603 13:26:01.265083 766 log.go:172] (0xc000370820) (1) Data frame sent\nI0603 13:26:01.265096 766 log.go:172] (0xc0001166e0) (0xc000370820) Stream removed, broadcasting: 1\nI0603 13:26:01.265230 766 log.go:172] (0xc0001166e0) Go away received\nI0603 13:26:01.265603 766 log.go:172] (0xc0001166e0) (0xc000370820) Stream removed, broadcasting: 1\nI0603 13:26:01.265622 766 log.go:172] (0xc0001166e0) (0xc000608000) Stream removed, broadcasting: 3\nI0603 13:26:01.265631 766 log.go:172] (0xc0001166e0) (0xc000370000) Stream removed, broadcasting: 5\n" Jun 3 13:26:01.269: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 13:26:01.269: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 13:26:01.273: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 3 13:26:11.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 13:26:11.277: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 13:26:11.277: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 3 13:26:11.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 13:26:11.501: INFO: stderr: "I0603 13:26:11.410381 788 log.go:172] (0xc00099c370) (0xc0004e4820) Create stream\nI0603 13:26:11.410436 788 log.go:172] (0xc00099c370) (0xc0004e4820) Stream added, broadcasting: 1\nI0603 13:26:11.414835 788 log.go:172] (0xc00099c370) Reply frame received for 1\nI0603 13:26:11.414872 788 log.go:172] (0xc00099c370) (0xc000690280) Create stream\nI0603 13:26:11.414881 788 log.go:172] 
(0xc00099c370) (0xc000690280) Stream added, broadcasting: 3\nI0603 13:26:11.415740 788 log.go:172] (0xc00099c370) Reply frame received for 3\nI0603 13:26:11.415787 788 log.go:172] (0xc00099c370) (0xc0004e4000) Create stream\nI0603 13:26:11.415804 788 log.go:172] (0xc00099c370) (0xc0004e4000) Stream added, broadcasting: 5\nI0603 13:26:11.416701 788 log.go:172] (0xc00099c370) Reply frame received for 5\nI0603 13:26:11.495392 788 log.go:172] (0xc00099c370) Data frame received for 5\nI0603 13:26:11.495446 788 log.go:172] (0xc0004e4000) (5) Data frame handling\nI0603 13:26:11.495468 788 log.go:172] (0xc0004e4000) (5) Data frame sent\nI0603 13:26:11.495500 788 log.go:172] (0xc00099c370) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 13:26:11.495538 788 log.go:172] (0xc00099c370) Data frame received for 3\nI0603 13:26:11.495597 788 log.go:172] (0xc000690280) (3) Data frame handling\nI0603 13:26:11.495613 788 log.go:172] (0xc000690280) (3) Data frame sent\nI0603 13:26:11.495624 788 log.go:172] (0xc00099c370) Data frame received for 3\nI0603 13:26:11.495639 788 log.go:172] (0xc000690280) (3) Data frame handling\nI0603 13:26:11.495682 788 log.go:172] (0xc0004e4000) (5) Data frame handling\nI0603 13:26:11.496928 788 log.go:172] (0xc00099c370) Data frame received for 1\nI0603 13:26:11.496944 788 log.go:172] (0xc0004e4820) (1) Data frame handling\nI0603 13:26:11.496953 788 log.go:172] (0xc0004e4820) (1) Data frame sent\nI0603 13:26:11.496969 788 log.go:172] (0xc00099c370) (0xc0004e4820) Stream removed, broadcasting: 1\nI0603 13:26:11.496999 788 log.go:172] (0xc00099c370) Go away received\nI0603 13:26:11.497462 788 log.go:172] (0xc00099c370) (0xc0004e4820) Stream removed, broadcasting: 1\nI0603 13:26:11.497486 788 log.go:172] (0xc00099c370) (0xc000690280) Stream removed, broadcasting: 3\nI0603 13:26:11.497497 788 log.go:172] (0xc00099c370) (0xc0004e4000) Stream removed, broadcasting: 5\n" Jun 3 13:26:11.501: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 13:26:11.501: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 13:26:11.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 13:26:11.729: INFO: stderr: "I0603 13:26:11.626987 810 log.go:172] (0xc000116e70) (0xc00094c640) Create stream\nI0603 13:26:11.627052 810 log.go:172] (0xc000116e70) (0xc00094c640) Stream added, broadcasting: 1\nI0603 13:26:11.629925 810 log.go:172] (0xc000116e70) Reply frame received for 1\nI0603 13:26:11.629969 810 log.go:172] (0xc000116e70) (0xc00094c6e0) Create stream\nI0603 13:26:11.629980 810 log.go:172] (0xc000116e70) (0xc00094c6e0) Stream added, broadcasting: 3\nI0603 13:26:11.630945 810 log.go:172] (0xc000116e70) Reply frame received for 3\nI0603 13:26:11.630997 810 log.go:172] (0xc000116e70) (0xc00094c780) Create stream\nI0603 13:26:11.631018 810 log.go:172] (0xc000116e70) (0xc00094c780) Stream added, broadcasting: 5\nI0603 13:26:11.632031 810 log.go:172] (0xc000116e70) Reply frame received for 5\nI0603 13:26:11.697966 810 log.go:172] (0xc000116e70) Data frame received for 5\nI0603 13:26:11.697999 810 log.go:172] (0xc00094c780) (5) Data frame handling\nI0603 13:26:11.698022 810 log.go:172] (0xc00094c780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 13:26:11.722433 810 log.go:172] (0xc000116e70) Data frame received for 5\nI0603 13:26:11.722522 810 log.go:172] (0xc00094c780) (5) Data frame handling\nI0603 13:26:11.722605 810 log.go:172] (0xc000116e70) Data frame received for 3\nI0603 13:26:11.722650 810 log.go:172] (0xc00094c6e0) (3) Data frame handling\nI0603 13:26:11.722668 810 log.go:172] (0xc00094c6e0) (3) Data frame sent\nI0603 13:26:11.722701 810 log.go:172] (0xc000116e70) Data frame received for 3\nI0603 
13:26:11.722717 810 log.go:172] (0xc00094c6e0) (3) Data frame handling\nI0603 13:26:11.724594 810 log.go:172] (0xc000116e70) Data frame received for 1\nI0603 13:26:11.724612 810 log.go:172] (0xc00094c640) (1) Data frame handling\nI0603 13:26:11.724630 810 log.go:172] (0xc00094c640) (1) Data frame sent\nI0603 13:26:11.724648 810 log.go:172] (0xc000116e70) (0xc00094c640) Stream removed, broadcasting: 1\nI0603 13:26:11.724754 810 log.go:172] (0xc000116e70) Go away received\nI0603 13:26:11.725014 810 log.go:172] (0xc000116e70) (0xc00094c640) Stream removed, broadcasting: 1\nI0603 13:26:11.725036 810 log.go:172] (0xc000116e70) (0xc00094c6e0) Stream removed, broadcasting: 3\nI0603 13:26:11.725048 810 log.go:172] (0xc000116e70) (0xc00094c780) Stream removed, broadcasting: 5\n" Jun 3 13:26:11.729: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 13:26:11.729: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 13:26:11.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 13:26:11.980: INFO: stderr: "I0603 13:26:11.860833 830 log.go:172] (0xc0008ccf20) (0xc0008a4f00) Create stream\nI0603 13:26:11.860880 830 log.go:172] (0xc0008ccf20) (0xc0008a4f00) Stream added, broadcasting: 1\nI0603 13:26:11.863434 830 log.go:172] (0xc0008ccf20) Reply frame received for 1\nI0603 13:26:11.863478 830 log.go:172] (0xc0008ccf20) (0xc00060c140) Create stream\nI0603 13:26:11.863489 830 log.go:172] (0xc0008ccf20) (0xc00060c140) Stream added, broadcasting: 3\nI0603 13:26:11.864116 830 log.go:172] (0xc0008ccf20) Reply frame received for 3\nI0603 13:26:11.864147 830 log.go:172] (0xc0008ccf20) (0xc0008a4000) Create stream\nI0603 13:26:11.864156 830 log.go:172] (0xc0008ccf20) (0xc0008a4000) Stream added, broadcasting: 5\nI0603 13:26:11.864761 830 
log.go:172] (0xc0008ccf20) Reply frame received for 5\nI0603 13:26:11.921935 830 log.go:172] (0xc0008ccf20) Data frame received for 5\nI0603 13:26:11.921958 830 log.go:172] (0xc0008a4000) (5) Data frame handling\nI0603 13:26:11.921972 830 log.go:172] (0xc0008a4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 13:26:11.974385 830 log.go:172] (0xc0008ccf20) Data frame received for 3\nI0603 13:26:11.974492 830 log.go:172] (0xc00060c140) (3) Data frame handling\nI0603 13:26:11.974516 830 log.go:172] (0xc00060c140) (3) Data frame sent\nI0603 13:26:11.974531 830 log.go:172] (0xc0008ccf20) Data frame received for 3\nI0603 13:26:11.974539 830 log.go:172] (0xc00060c140) (3) Data frame handling\nI0603 13:26:11.974561 830 log.go:172] (0xc0008ccf20) Data frame received for 5\nI0603 13:26:11.974575 830 log.go:172] (0xc0008a4000) (5) Data frame handling\nI0603 13:26:11.976672 830 log.go:172] (0xc0008ccf20) Data frame received for 1\nI0603 13:26:11.976691 830 log.go:172] (0xc0008a4f00) (1) Data frame handling\nI0603 13:26:11.976702 830 log.go:172] (0xc0008a4f00) (1) Data frame sent\nI0603 13:26:11.976712 830 log.go:172] (0xc0008ccf20) (0xc0008a4f00) Stream removed, broadcasting: 1\nI0603 13:26:11.976723 830 log.go:172] (0xc0008ccf20) Go away received\nI0603 13:26:11.977005 830 log.go:172] (0xc0008ccf20) (0xc0008a4f00) Stream removed, broadcasting: 1\nI0603 13:26:11.977017 830 log.go:172] (0xc0008ccf20) (0xc00060c140) Stream removed, broadcasting: 3\nI0603 13:26:11.977024 830 log.go:172] (0xc0008ccf20) (0xc0008a4000) Stream removed, broadcasting: 5\n" Jun 3 13:26:11.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 13:26:11.981: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 13:26:11.981: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 13:26:12.000: INFO: Waiting for stateful set status.readyReplicas to 
become 0, currently 1
Jun 3 13:26:22.008: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 3 13:26:22.008: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 3 13:26:22.008: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 3 13:26:22.040: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:22.040: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:22.040: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:22.040: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:22.040: INFO:
Jun 3 13:26:22.040: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:23.122: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:23.122: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:23.122: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:23.122: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:23.122: INFO:
Jun 3 13:26:23.122: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:24.126: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:24.126: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:24.126: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:24.126: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:24.126: INFO:
Jun 3 13:26:24.126: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:25.132: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:25.132: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:25.132: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:25.132: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:25.132: INFO:
Jun 3 13:26:25.132: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:26.137: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:26.137: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:26.138: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:26.138: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:26.138: INFO:
Jun 3 13:26:26.138: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:27.142: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:27.142: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:27.142: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:27.142: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:27.142: INFO:
Jun 3 13:26:27.142: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:28.147: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:28.147: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }]
Jun 3 13:26:28.148: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:28.148: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }]
Jun 3 13:26:28.148: INFO:
Jun 3 13:26:28.148: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 3 13:26:29.153: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 13:26:29.153: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True
0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }] Jun 3 13:26:29.154: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:29.154: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:29.154: INFO: Jun 3 13:26:29.154: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 13:26:30.158: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 13:26:30.158: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }] Jun 3 13:26:30.158: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:30.158: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:30.158: INFO: Jun 3 13:26:30.158: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 13:26:31.164: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 13:26:31.164: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:30 +0000 UTC }] Jun 3 13:26:31.164: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:31.164: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:26:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:25:50 +0000 UTC }] Jun 3 13:26:31.164: INFO: Jun 3 13:26:31.164: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3273 Jun 3 13:26:32.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:26:32.315: INFO: rc: 1 Jun 3 13:26:32.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0024b1890 exit status 1 true [0xc002c365a0 0xc002c365b8 0xc002c365d0] [0xc002c365a0 0xc002c365b8 0xc002c365d0] [0xc002c365b0 0xc002c365c8] [0xba70e0 0xba70e0] 0xc003142a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:26:42.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:26:42.409: INFO: rc: 1 Jun 3 13:26:42.409: INFO: Waiting 
10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000f13440 exit status 1 true [0xc000bea1e8 0xc000bea200 0xc000bea218] [0xc000bea1e8 0xc000bea200 0xc000bea218] [0xc000bea1f8 0xc000bea210] [0xba70e0 0xba70e0] 0xc001f33500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:26:52.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:26:56.092: INFO: rc: 1 Jun 3 13:26:56.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0032ad230 exit status 1 true [0xc002afa518 0xc002afa530 0xc002afa548] [0xc002afa518 0xc002afa530 0xc002afa548] [0xc002afa528 0xc002afa540] [0xba70e0 0xba70e0] 0xc002dd2960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:06.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:06.334: INFO: rc: 1 Jun 3 13:27:06.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0032ad2f0 exit status 1 true [0xc002afa550 0xc002afa568 0xc002afa580] 
[0xc002afa550 0xc002afa568 0xc002afa580] [0xc002afa560 0xc002afa578] [0xba70e0 0xba70e0] 0xc002dd2cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:16.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:16.437: INFO: rc: 1 Jun 3 13:27:16.437: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0032ad3b0 exit status 1 true [0xc002afa588 0xc002afa5a0 0xc002afa5b8] [0xc002afa588 0xc002afa5a0 0xc002afa5b8] [0xc002afa598 0xc002afa5b0] [0xba70e0 0xba70e0] 0xc002dd2fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:26.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:26.526: INFO: rc: 1 Jun 3 13:27:26.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002f60090 exit status 1 true [0xc0006c6140 0xc0006c61e0 0xc0006c62a0] [0xc0006c6140 0xc0006c61e0 0xc0006c62a0] [0xc0006c61d0 0xc0006c6278] [0xba70e0 0xba70e0] 0xc0020fcde0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:36.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:36.618: INFO: rc: 1 Jun 3 13:27:36.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0012940c0 exit status 1 true [0xc002c36000 0xc002c36018 0xc002c36030] [0xc002c36000 0xc002c36018 0xc002c36030] [0xc002c36010 0xc002c36028] [0xba70e0 0xba70e0] 0xc001fbb7a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:46.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:46.709: INFO: rc: 1 Jun 3 13:27:46.709: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002f60180 exit status 1 true [0xc0006c62f8 0xc0006c6430 0xc0006c6560] [0xc0006c62f8 0xc0006c6430 0xc0006c6560] [0xc0006c6398 0xc0006c64d8] [0xba70e0 0xba70e0] 0xc001ef8600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:27:56.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:27:56.799: INFO: rc: 1 Jun 3 13:27:56.799: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-1" not found [] 0xc002f60270 exit status 1 true [0xc0006c65c0 0xc0006c6a98 0xc0006c6b40] [0xc0006c65c0 0xc0006c6a98 0xc0006c6b40] [0xc0006c66f8 0xc0006c6ad0] [0xba70e0 0xba70e0] 0xc001ba0a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:06.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:06.889: INFO: rc: 1 Jun 3 13:28:06.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0012941b0 exit status 1 true [0xc002c36038 0xc002c36050 0xc002c36068] [0xc002c36038 0xc002c36050 0xc002c36068] [0xc002c36048 0xc002c36060] [0xba70e0 0xba70e0] 0xc001cfa000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:16.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:16.984: INFO: rc: 1 Jun 3 13:28:16.984: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002f60360 exit status 1 true [0xc0006c6b98 0xc0006c6c38 0xc0006c6d08] [0xc0006c6b98 0xc0006c6c38 0xc0006c6d08] [0xc0006c6bd8 0xc0006c6cc8] [0xba70e0 0xba70e0] 0xc002c73500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:26.984: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:27.071: INFO: rc: 1 Jun 3 13:28:27.071: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0030460f0 exit status 1 true [0xc00212a000 0xc00212a018 0xc00212a030] [0xc00212a000 0xc00212a018 0xc00212a030] [0xc00212a010 0xc00212a028] [0xba70e0 0xba70e0] 0xc00247e540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:37.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:37.165: INFO: rc: 1 Jun 3 13:28:37.165: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001294270 exit status 1 true [0xc002c36070 0xc002c36088 0xc002c360a0] [0xc002c36070 0xc002c36088 0xc002c360a0] [0xc002c36080 0xc002c36098] [0xba70e0 0xba70e0] 0xc001cfa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:47.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:47.261: INFO: rc: 1 Jun 3 13:28:47.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046210 exit status 1 true [0xc00212a038 0xc00212a050 0xc00212a068] [0xc00212a038 0xc00212a050 0xc00212a068] [0xc00212a048 0xc00212a060] [0xba70e0 0xba70e0] 0xc00247eb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:28:57.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:28:57.354: INFO: rc: 1 Jun 3 13:28:57.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00147df80 exit status 1 true [0xc0009ca000 0xc0009ca018 0xc0009ca030] [0xc0009ca000 0xc0009ca018 0xc0009ca030] [0xc0009ca010 0xc0009ca028] [0xba70e0 0xba70e0] 0xc002916240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:07.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:07.453: INFO: rc: 1 Jun 3 13:29:07.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001294390 exit status 1 true [0xc002c360a8 0xc002c360c0 0xc002c360d8] [0xc002c360a8 0xc002c360c0 0xc002c360d8] [0xc002c360b8 0xc002c360d0] [0xba70e0 0xba70e0] 0xc001cfa960 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:17.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:17.539: INFO: rc: 1 Jun 3 13:29:17.539: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001294480 exit status 1 true [0xc002c360e0 0xc002c360f8 0xc002c36110] [0xc002c360e0 0xc002c360f8 0xc002c36110] [0xc002c360f0 0xc002c36108] [0xba70e0 0xba70e0] 0xc001cfaf00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:27.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:27.622: INFO: rc: 1 Jun 3 13:29:27.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046090 exit status 1 true [0xc00212a008 0xc00212a020 0xc00212a038] [0xc00212a008 0xc00212a020 0xc00212a038] [0xc00212a018 0xc00212a030] [0xba70e0 0xba70e0] 0xc001ba0a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:37.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:37.772: INFO: rc: 1 Jun 3 13:29:37.772: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046180 exit status 1 true [0xc00212a040 0xc00212a058 0xc00212a070] [0xc00212a040 0xc00212a058 0xc00212a070] [0xc00212a050 0xc00212a068] [0xba70e0 0xba70e0] 0xc001ef8f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:47.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:47.869: INFO: rc: 1 Jun 3 13:29:47.869: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001294090 exit status 1 true [0xc0006c6060 0xc0006c61d0 0xc0006c6278] [0xc0006c6060 0xc0006c61d0 0xc0006c6278] [0xc0006c6190 0xc0006c6210] [0xba70e0 0xba70e0] 0xc001fbb7a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:29:57.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:29:58.051: INFO: rc: 1 Jun 3 13:29:58.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046270 exit status 1 true [0xc00212a078 0xc00212a090 0xc00212a0a8] 
[0xc00212a078 0xc00212a090 0xc00212a0a8] [0xc00212a088 0xc00212a0a0] [0xba70e0 0xba70e0] 0xc0020fc960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:08.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:08.145: INFO: rc: 1 Jun 3 13:30:08.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046330 exit status 1 true [0xc00212a0b0 0xc00212a0c8 0xc00212a0e0] [0xc00212a0b0 0xc00212a0c8 0xc00212a0e0] [0xc00212a0c0 0xc00212a0d8] [0xba70e0 0xba70e0] 0xc0020fdce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:18.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:18.235: INFO: rc: 1 Jun 3 13:30:18.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc003046420 exit status 1 true [0xc00212a0e8 0xc00212a100 0xc00212a118] [0xc00212a0e8 0xc00212a100 0xc00212a118] [0xc00212a0f8 0xc00212a110] [0xba70e0 0xba70e0] 0xc00247e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:28.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:28.321: INFO: rc: 1 Jun 3 13:30:28.321: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0030464e0 exit status 1 true [0xc00212a120 0xc00212a138 0xc00212a150] [0xc00212a120 0xc00212a138 0xc00212a150] [0xc00212a130 0xc00212a148] [0xba70e0 0xba70e0] 0xc00247ec00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:38.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:38.404: INFO: rc: 1 Jun 3 13:30:38.404: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001cfc000 exit status 1 true [0xc0009ca000 0xc0009ca018 0xc0009ca030] [0xc0009ca000 0xc0009ca018 0xc0009ca030] [0xc0009ca010 0xc0009ca028] [0xba70e0 0xba70e0] 0xc002c73620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:48.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:48.491: INFO: rc: 1 Jun 3 13:30:48.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-1" not found [] 0xc002f60150 exit status 1 true [0xc002c36000 0xc002c36018 0xc002c36030] [0xc002c36000 0xc002c36018 0xc002c36030] [0xc002c36010 0xc002c36028] [0xba70e0 0xba70e0] 0xc002916240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:30:58.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:30:58.575: INFO: rc: 1 Jun 3 13:30:58.575: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0030465a0 exit status 1 true [0xc00212a158 0xc00212a170 0xc00212a188] [0xc00212a158 0xc00212a170 0xc00212a188] [0xc00212a168 0xc00212a180] [0xba70e0 0xba70e0] 0xc00247f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:31:08.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:31:08.680: INFO: rc: 1 Jun 3 13:31:08.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001294240 exit status 1 true [0xc0006c62a0 0xc0006c6398 0xc0006c64d8] [0xc0006c62a0 0xc0006c6398 0xc0006c64d8] [0xc0006c6318 0xc0006c6480] [0xba70e0 0xba70e0] 0xc001cfa1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:31:18.681: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:31:18.776: INFO: rc: 1 Jun 3 13:31:18.776: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002f60240 exit status 1 true [0xc002c36038 0xc002c36050 0xc002c36068] [0xc002c36038 0xc002c36050 0xc002c36068] [0xc002c36048 0xc002c36060] [0xba70e0 0xba70e0] 0xc0029166c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:31:28.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:31:28.867: INFO: rc: 1 Jun 3 13:31:28.867: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00147df50 exit status 1 true [0xc0009ca008 0xc0009ca020 0xc0009ca038] [0xc0009ca008 0xc0009ca020 0xc0009ca038] [0xc0009ca018 0xc0009ca030] [0xba70e0 0xba70e0] 0xc0020fcde0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 3 13:31:38.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3273 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 13:31:38.959: INFO: rc: 1 Jun 3 13:31:38.960: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Jun 3 13:31:38.960: INFO: Scaling statefulset ss to 0 Jun 3 
13:31:38.967: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 3 13:31:38.969: INFO: Deleting all statefulset in ns statefulset-3273 Jun 3 13:31:38.971: INFO: Scaling statefulset ss to 0 Jun 3 13:31:38.977: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 13:31:38.979: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:31:38.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3273" for this suite. Jun 3 13:31:45.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:31:45.072: INFO: namespace statefulset-3273 deletion completed in 6.074854961s • [SLOW TEST:377.224 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:31:45.072: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:32:45.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9417" for this suite. Jun 3 13:33:23.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:33:23.266: INFO: namespace container-probe-9417 deletion completed in 38.071584433s • [SLOW TEST:98.193 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:33:23.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned 
in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-ab058555-9cd7-4094-a210-8317fd69e880 STEP: Creating secret with name secret-projected-all-test-volume-7477ffa0-3338-41d3-95bf-346ffaebada5 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 3 13:33:23.362: INFO: Waiting up to 5m0s for pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9" in namespace "projected-611" to be "success or failure" Jun 3 13:33:23.368: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.518585ms Jun 3 13:33:25.371: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008690481s Jun 3 13:33:27.375: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012948006s Jun 3 13:33:29.378: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016229677s Jun 3 13:33:31.382: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019988295s Jun 3 13:33:33.468: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10584777s Jun 3 13:33:35.471: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.109017083s Jun 3 13:33:37.475: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.112393764s STEP: Saw pod success Jun 3 13:33:37.475: INFO: Pod "projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9" satisfied condition "success or failure" Jun 3 13:33:37.478: INFO: Trying to get logs from node iruya-worker pod projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9 container projected-all-volume-test: STEP: delete the pod Jun 3 13:33:39.521: INFO: Waiting for pod projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9 to disappear Jun 3 13:33:39.883: INFO: Pod projected-volume-058a18ef-2c4e-4103-bbe9-ce51f78364b9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:33:39.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-611" for this suite. Jun 3 13:33:46.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:33:46.159: INFO: namespace projected-611 deletion completed in 6.271541736s • [SLOW TEST:22.893 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:33:46.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:33:46.248: INFO: Creating deployment "nginx-deployment" Jun 3 13:33:46.251: INFO: Waiting for observed generation 1 Jun 3 13:33:50.685: INFO: Waiting for all required pods to come up Jun 3 13:33:54.190: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 3 13:34:14.345: INFO: Waiting for deployment "nginx-deployment" to complete Jun 3 13:34:14.348: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 3 13:34:14.352: INFO: Updating deployment nginx-deployment Jun 3 13:34:14.352: INFO: Waiting for observed generation 2 Jun 3 13:34:16.411: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 3 13:34:16.414: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 3 13:34:16.418: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 3 13:34:16.423: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 3 13:34:16.424: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 3 13:34:16.425: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 3 13:34:16.428: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 3 13:34:16.428: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 3 13:34:16.433: INFO: Updating deployment nginx-deployment Jun 3 13:34:16.433: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 3 13:34:16.555: INFO: Verifying that first 
rollout's replicaset has .spec.replicas = 20 Jun 3 13:34:16.634: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 3 13:34:17.989: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3914,SelfLink:/apis/apps/v1/namespaces/deployment-3914/deployments/nginx-deployment,UID:722885e7-013a-4f31-af84-eadd134353ca,ResourceVersion:14445374,Generation:3,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-03 13:34:15 +0000 UTC 2020-06-03 13:33:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-06-03 13:34:16 +0000 UTC 2020-06-03 13:34:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 3 13:34:18.665: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3914,SelfLink:/apis/apps/v1/namespaces/deployment-3914/replicasets/nginx-deployment-55fb7cb77f,UID:671002fb-35e1-42b2-bfcf-8db3629b94e1,ResourceVersion:14445368,Generation:3,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 722885e7-013a-4f31-af84-eadd134353ca 0xc001684f37 0xc001684f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 13:34:18.665: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 3 13:34:18.665: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3914,SelfLink:/apis/apps/v1/namespaces/deployment-3914/replicasets/nginx-deployment-7b8c6f4498,UID:94b42dc5-413f-4ee3-82df-55a42e7a90bc,ResourceVersion:14445366,Generation:3,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 722885e7-013a-4f31-af84-eadd134353ca 0xc001685287 0xc001685288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-2lmzl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2lmzl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-2lmzl,UID:a937dba5-7c1b-4837-a4e1-17a961e5d1c5,ResourceVersion:14445332,Generation:0,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc001685bf7 0xc001685bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001685c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001685c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-03 13:34:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-7rpdh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7rpdh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-7rpdh,UID:a6d9c8cb-297e-498b-9379-859b60b4ba41,ResourceVersion:14445379,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc001685d67 0xc001685d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001685de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001685e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-chtb2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-chtb2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-chtb2,UID:400d495a-239b-44d5-afc6-06fee50004be,ResourceVersion:14445362,Generation:0,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc001685fa7 0xc001685fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0027d8020} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-03 13:34:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-cl5k4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cl5k4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-cl5k4,UID:abce7427-e559-43b1-8e4b-de2008a17caf,ResourceVersion:14445358,Generation:0,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8117 0xc0027d8118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d8190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d81b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-03 13:34:14 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-ghjnp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ghjnp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-ghjnp,UID:53ae15fb-3b31-4800-abad-96872823005c,ResourceVersion:14445409,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8287 0xc0027d8288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d8300} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-gskks" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gskks,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-gskks,UID:a5068f74-9ae9-4790-91ab-3c3b9a83947a,ResourceVersion:14445396,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d83a7 0xc0027d83a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d8420} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.778: INFO: Pod "nginx-deployment-55fb7cb77f-j9fgb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j9fgb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-j9fgb,UID:fbdcd7fe-02b7-4122-a230-0acdf8a49b72,ResourceVersion:14445395,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d84c7 0xc0027d84c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0027d8540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-knx74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-knx74,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-knx74,UID:0bd5c1d2-2577-4095-8242-700307689cba,ResourceVersion:14445335,Generation:0,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d85e7 0xc0027d85e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d8660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-03 13:34:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-nr7fb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nr7fb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-nr7fb,UID:9918fdb1-3210-4dfc-b7dd-023d5d935141,ResourceVersion:14445422,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8757 0xc0027d8758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0027d87d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d87f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-nvm9q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nvm9q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-nvm9q,UID:275e6236-85dc-41d0-a97c-e1470f5f8548,ResourceVersion:14445410,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8877 0xc0027d8878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d88f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-qnsbf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qnsbf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-qnsbf,UID:28699233-6ffb-43d0-98db-9577823e8b13,ResourceVersion:14445418,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d89e7 0xc0027d89e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d8b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-rwm2n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rwm2n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-rwm2n,UID:34fe780d-455f-4aa7-bb88-4e4ee8c6879e,ResourceVersion:14445416,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8c77 0xc0027d8c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0027d8d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d8d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-55fb7cb77f-wps62" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wps62,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-55fb7cb77f-wps62,UID:92d26165-6419-40b1-86f2-87a844afaa23,ResourceVersion:14445340,Generation:0,CreationTimestamp:2020-06-03 13:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 671002fb-35e1-42b2-bfcf-8db3629b94e1 0xc0027d8e87 0xc0027d8e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d90b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d90d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-03 13:34:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.779: INFO: Pod "nginx-deployment-7b8c6f4498-2tppr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tppr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-2tppr,UID:9398946b-746d-4047-bc9a-6dfa6a7b4ec9,ResourceVersion:14445293,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0027d9237 0xc0027d9238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d92b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d92d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.236,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://02f7ccef825d73d623678736eb7a8bcf3a1bddd264ca07012446864f821faa93}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-5dkzl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5dkzl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-5dkzl,UID:a1e1e8d5-f7c0-49cd-9cae-16648dc561cb,ResourceVersion:14445262,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0027d9587 0xc0027d9588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d96c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d96e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.185,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://60a65c62a27f901549cfcc9baa993e7db2c782997fdcb5add9c668ea2ca0f7d8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-5h59z" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5h59z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-5h59z,UID:19288b6d-b153-43ee-8b88-68d1ba3317c2,ResourceVersion:14445291,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0027d97e7 0xc0027d97e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d9860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d9880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.188,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d2645b213f3fae95bf9a9d3da0b5e6b6d2d61bc27defa766f72b9ca57f6dcf67}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-7zh7s" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7zh7s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-7zh7s,UID:deecfb11-b2bb-4333-b82c-b380ace49b9a,ResourceVersion:14445283,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0027d9b77 0xc0027d9b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027d9bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027d9c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.238,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fc49e0bce12c1b2cde50b9ec1446a549ef8dcf47bd936b52eac7a52684f6aac8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-9m5k6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9m5k6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-9m5k6,UID:eb58fed0-49ce-4fb5-8750-f4e7d8dac1a7,ResourceVersion:14445426,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0027d9f77 0xc0027d9f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000362020} {node.kubernetes.io/unreachable Exists NoExecute 0xc000362040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-9nlbx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nlbx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-9nlbx,UID:78fcfa0b-8755-4e94-9702-af099c5585ec,ResourceVersion:14445408,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0003629e7 0xc0003629e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000362ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000362bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-9nntn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nntn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-9nntn,UID:566fe2a6-2adf-4fe0-b7f9-206e943655f4,ResourceVersion:14445278,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000362d87 0xc000362d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000362f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000362f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.237,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2b7c287022c3ae7f342e4df68f48caa1eb1216fb06b30686f7db89fa65b2b299}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.780: INFO: Pod "nginx-deployment-7b8c6f4498-bhkpp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhkpp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-bhkpp,UID:86b5634f-4255-4bda-86d3-fdb6eb7ed2d1,ResourceVersion:14445378,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000363207 0xc000363208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0003637d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000363800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-bnktw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bnktw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-bnktw,UID:b6d7efa4-88e9-4720-8942-9d340d9fbe7c,ResourceVersion:14445421,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000617117 0xc000617118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000617220} {node.kubernetes.io/unreachable Exists NoExecute 0xc000617250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-f525v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f525v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-f525v,UID:f2ac020d-88f2-4afc-b57a-25b9a13f3ff2,ResourceVersion:14445412,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000617357 0xc000617358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006175b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0006175f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-jbsmc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jbsmc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-jbsmc,UID:bb504322-4ede-4af1-b084-4b8ce8e1c1ad,ResourceVersion:14445397,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc0006178b7 0xc0006178b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000617b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000617c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-kz2tp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kz2tp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-kz2tp,UID:577fe9a7-7054-4906-a0c8-7c55d5b75e91,ResourceVersion:14445425,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000617de7 0xc000617de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000617e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000617f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-pd45w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pd45w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-pd45w,UID:83d91447-74e6-4edc-80dc-5981215d73d2,ResourceVersion:14445424,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000617fd7 0xc000617fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e070} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-r7j8r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7j8r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-r7j8r,UID:ec8bebdd-cccf-41bc-ae94-c279317f1122,ResourceVersion:14445423,Generation:0,CreationTimestamp:2020-06-03 13:34:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e227 0xc000c0e228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e2a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.781: INFO: Pod "nginx-deployment-7b8c6f4498-rtp5l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rtp5l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-rtp5l,UID:152efd3d-4f83-49fc-be1b-d5e4f6a7827b,ResourceVersion:14445398,Generation:0,CreationTimestamp:2020-06-03 13:34:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e347 0xc000c0e348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.782: INFO: Pod "nginx-deployment-7b8c6f4498-vzgds" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vzgds,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-vzgds,UID:fb2308e5-3e2a-4b40-baf0-5eb5381a7335,ResourceVersion:14445289,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e4c7 0xc000c0e4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e540} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.235,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35b1441b4125f06edbfef3806426d4700e9b0cfeb67358b64de9c029d42b7148}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.782: INFO: Pod "nginx-deployment-7b8c6f4498-w4wwv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w4wwv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-w4wwv,UID:13c1d39a-84b3-4d7e-8096-9c3399eaa5a6,ResourceVersion:14445300,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e637 0xc000c0e638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.239,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://974e9624bf54ebfdf78cca0d0209420e3fa0d0b3a02023230dd406577670b49b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.782: INFO: Pod "nginx-deployment-7b8c6f4498-zhd55" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zhd55,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-zhd55,UID:0640d515-a15a-4ed2-8492-0e6a08b1d3db,ResourceVersion:14445404,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e7a7 0xc000c0e7a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e820} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.782: INFO: Pod "nginx-deployment-7b8c6f4498-zk448" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zk448,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-zk448,UID:d6337606-fd61-47fb-8d59-6950403152e6,ResourceVersion:14445406,Generation:0,CreationTimestamp:2020-06-03 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e8c7 0xc000c0e8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0e940} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0e960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 3 13:34:18.782: INFO: Pod "nginx-deployment-7b8c6f4498-zxggf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zxggf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3914,SelfLink:/api/v1/namespaces/deployment-3914/pods/nginx-deployment-7b8c6f4498-zxggf,UID:065b6f8b-8ce0-4b35-be30-e942a5d15039,ResourceVersion:14445279,Generation:0,CreationTimestamp:2020-06-03 13:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 94b42dc5-413f-4ee3-82df-55a42e7a90bc 0xc000c0e9e7 0xc000c0e9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kmswr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmswr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kmswr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c0ea60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c0ea80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:34:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:33:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.187,StartTime:2020-06-03 13:33:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-03 13:34:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8fe36e30a88690bd0f0dad7015684591a19558e74a038afe2bb894548e7eef5c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:34:18.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-3914" for this suite. Jun 3 13:34:42.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:34:43.033: INFO: namespace deployment-3914 deletion completed in 24.215579309s

• [SLOW TEST:56.874 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:34:43.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jun 3 13:34:43.329: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:34:43.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1272" for this suite.
Jun 3 13:34:49.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:34:50.014: INFO: namespace kubectl-1272 deletion completed in 6.536201984s

• [SLOW TEST:6.980 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:34:50.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2991, will wait for the garbage collector to delete the pods
Jun 3 13:35:10.439: INFO: Deleting Job.batch foo took: 5.737018ms
Jun 3 13:35:10.739: INFO: Terminating Job.batch foo pods took: 300.24339ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:36:02.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2991" for this suite.
Jun 3 13:36:08.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:36:08.450: INFO: namespace job-2991 deletion completed in 6.150113851s

• [SLOW TEST:78.436 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:36:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b97b847d-f32c-445b-801b-c066d40078ce in namespace container-probe-6899
Jun 3 13:36:14.517: INFO: Started pod test-webserver-b97b847d-f32c-445b-801b-c066d40078ce in namespace container-probe-6899
STEP: checking the pod's current state and verifying that restartCount is present
Jun 3 13:36:14.519: INFO: Initial restart count of pod test-webserver-b97b847d-f32c-445b-801b-c066d40078ce is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:40:16.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6899" for this suite.
Jun 3 13:40:22.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:40:22.596: INFO: namespace container-probe-6899 deletion completed in 6.220578158s

• [SLOW TEST:254.146 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:40:22.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jun 3 13:40:22.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6964 run e2e-test-rm-busybox-job
--image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 3 13:40:30.205: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0603 13:40:30.146726 1468 log.go:172] (0xc00011a790) (0xc0005ca500) Create stream\nI0603 13:40:30.146766 1468 log.go:172] (0xc00011a790) (0xc0005ca500) Stream added, broadcasting: 1\nI0603 13:40:30.150513 1468 log.go:172] (0xc00011a790) Reply frame received for 1\nI0603 13:40:30.150538 1468 log.go:172] (0xc00011a790) (0xc0002d5900) Create stream\nI0603 13:40:30.150545 1468 log.go:172] (0xc00011a790) (0xc0002d5900) Stream added, broadcasting: 3\nI0603 13:40:30.151456 1468 log.go:172] (0xc00011a790) Reply frame received for 3\nI0603 13:40:30.151491 1468 log.go:172] (0xc00011a790) (0xc0005ca0a0) Create stream\nI0603 13:40:30.151501 1468 log.go:172] (0xc00011a790) (0xc0005ca0a0) Stream added, broadcasting: 5\nI0603 13:40:30.152255 1468 log.go:172] (0xc00011a790) Reply frame received for 5\nI0603 13:40:30.152272 1468 log.go:172] (0xc00011a790) (0xc0005ca1e0) Create stream\nI0603 13:40:30.152278 1468 log.go:172] (0xc00011a790) (0xc0005ca1e0) Stream added, broadcasting: 7\nI0603 13:40:30.153078 1468 log.go:172] (0xc00011a790) Reply frame received for 7\nI0603 13:40:30.153359 1468 log.go:172] (0xc0002d5900) (3) Writing data frame\nI0603 13:40:30.153459 1468 log.go:172] (0xc0002d5900) (3) Writing data frame\nI0603 13:40:30.154300 1468 log.go:172] (0xc00011a790) Data frame received for 5\nI0603 13:40:30.154321 1468 log.go:172] (0xc0005ca0a0) (5) Data frame handling\nI0603 13:40:30.154339 1468 log.go:172] (0xc0005ca0a0) (5) Data frame sent\nI0603 13:40:30.154682 1468 log.go:172] (0xc00011a790) Data frame received for 5\nI0603 13:40:30.154692 1468 log.go:172] (0xc0005ca0a0) (5) Data frame 
handling\nI0603 13:40:30.154696 1468 log.go:172] (0xc0005ca0a0) (5) Data frame sent\nI0603 13:40:30.182079 1468 log.go:172] (0xc00011a790) Data frame received for 7\nI0603 13:40:30.182101 1468 log.go:172] (0xc0005ca1e0) (7) Data frame handling\nI0603 13:40:30.182137 1468 log.go:172] (0xc00011a790) Data frame received for 5\nI0603 13:40:30.182169 1468 log.go:172] (0xc0005ca0a0) (5) Data frame handling\nI0603 13:40:30.182535 1468 log.go:172] (0xc00011a790) Data frame received for 1\nI0603 13:40:30.182548 1468 log.go:172] (0xc0005ca500) (1) Data frame handling\nI0603 13:40:30.182555 1468 log.go:172] (0xc0005ca500) (1) Data frame sent\nI0603 13:40:30.182566 1468 log.go:172] (0xc00011a790) (0xc0005ca500) Stream removed, broadcasting: 1\nI0603 13:40:30.182622 1468 log.go:172] (0xc00011a790) (0xc0005ca500) Stream removed, broadcasting: 1\nI0603 13:40:30.182628 1468 log.go:172] (0xc00011a790) (0xc0002d5900) Stream removed, broadcasting: 3\nI0603 13:40:30.182634 1468 log.go:172] (0xc00011a790) (0xc0005ca0a0) Stream removed, broadcasting: 5\nI0603 13:40:30.182813 1468 log.go:172] (0xc00011a790) (0xc0005ca1e0) Stream removed, broadcasting: 7\nI0603 13:40:30.183052 1468 log.go:172] (0xc00011a790) Go away received\n" Jun 3 13:40:30.205: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:40:32.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6964" for this suite. 
Jun 3 13:40:38.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:40:38.283: INFO: namespace kubectl-6964 deletion completed in 6.067107204s

• [SLOW TEST:15.685 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:40:38.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2851/configmap-test-db7e4131-c83b-4159-b4b0-90a31673f0b0
STEP: Creating a pod to test consume configMaps
Jun 3 13:40:38.485: INFO: Waiting up to 5m0s for pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4" in namespace "configmap-2851" to be "success or failure"
Jun 3 13:40:38.503: INFO: Pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.743506ms
Jun 3 13:40:40.506: INFO: Pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.020752758s
Jun 3 13:40:42.510: INFO: Pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024884347s
Jun 3 13:40:44.515: INFO: Pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0291532s
STEP: Saw pod success
Jun 3 13:40:44.515: INFO: Pod "pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4" satisfied condition "success or failure"
Jun 3 13:40:44.517: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4 container env-test:
STEP: delete the pod
Jun 3 13:40:44.640: INFO: Waiting for pod pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4 to disappear
Jun 3 13:40:44.895: INFO: Pod pod-configmaps-e932596a-42b1-447e-9725-7bec1cc96bb4 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 13:40:44.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2851" for this suite.
Jun 3 13:40:51.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 13:40:51.253: INFO: namespace configmap-2851 deletion completed in 6.35468925s

• [SLOW TEST:12.971 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 13:40:51.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jun 3 13:40:51.377: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446573,Generation:0,CreationTimestamp:2020-06-03
13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 3 13:40:51.378: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446574,Generation:0,CreationTimestamp:2020-06-03 13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 3 13:40:51.378: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446575,Generation:0,CreationTimestamp:2020-06-03 13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP:
Expecting to observe an add notification for the watched object when the label value was restored Jun 3 13:41:01.414: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446597,Generation:0,CreationTimestamp:2020-06-03 13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 3 13:41:01.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446598,Generation:0,CreationTimestamp:2020-06-03 13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 3 13:41:01.414: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8733,SelfLink:/api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed,UID:2961b473-e5e5-4c42-a4bc-b43ff3bef13c,ResourceVersion:14446599,Generation:0,CreationTimestamp:2020-06-03 13:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:41:01.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8733" for this suite. Jun 3 13:41:07.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:41:07.523: INFO: namespace watch-8733 deletion completed in 6.083361314s • [SLOW TEST:16.270 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:41:07.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-1aaecc08-bb0a-45b8-80d5-7d62f0985c7b STEP: Creating a pod to test consume configMaps Jun 3 13:41:07.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d" in namespace "configmap-1229" to be "success or failure" Jun 3 13:41:07.789: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004909ms Jun 3 13:41:09.792: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005180511s Jun 3 13:41:11.796: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009133783s Jun 3 13:41:13.799: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012392409s Jun 3 13:41:15.841: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054094195s Jun 3 13:41:17.864: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077703509s Jun 3 13:41:20.093: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.30608289s Jun 3 13:41:22.470: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.68371943s Jun 3 13:41:24.473: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.686524737s Jun 3 13:41:26.584: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.796872169s Jun 3 13:41:28.793: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.006588626s Jun 3 13:41:30.797: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.010489605s Jun 3 13:41:33.638: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.851076864s Jun 3 13:41:35.686: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.898875148s Jun 3 13:41:37.690: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.903589019s STEP: Saw pod success Jun 3 13:41:37.690: INFO: Pod "pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d" satisfied condition "success or failure" Jun 3 13:41:37.693: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d container configmap-volume-test: STEP: delete the pod Jun 3 13:41:38.043: INFO: Waiting for pod pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d to disappear Jun 3 13:41:38.218: INFO: Pod pod-configmaps-8f64f616-db41-4307-8739-1d9f19cd676d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:41:38.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1229" for this suite. 
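The polling above waits for a pod that mounts a ConfigMap as a volume and reads it as a non-root user until the pod reaches "Succeeded". For readers who want to reproduce the scenario outside the e2e framework, a minimal manifest in the same spirit might look like the following sketch. Every name, image, and key here is a hypothetical stand-in, not the generated objects from the log; the test's actual pod spec is built in code in test/e2e/common/configmap_volume.go.

```yaml
# Hypothetical stand-in for the generated test objects.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # run as non-root, matching the [LinuxOnly] non-root variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

As in the log, such a pod runs to completion and its phase can be polled until it reports Succeeded (the e2e framework's "success or failure" condition).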
Jun 3 13:41:44.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:41:44.407: INFO: namespace configmap-1229 deletion completed in 6.18481944s • [SLOW TEST:36.883 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:41:44.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 3 13:41:44.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8178' Jun 3 13:41:45.382: INFO: stderr: "" Jun 3 13:41:45.382: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo 
pods to come up. Jun 3 13:41:45.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8178' Jun 3 13:41:45.583: INFO: stderr: "" Jun 3 13:41:45.583: INFO: stdout: "update-demo-nautilus-jvp4d update-demo-nautilus-n98dd " Jun 3 13:41:45.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvp4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:45.763: INFO: stderr: "" Jun 3 13:41:45.763: INFO: stdout: "" Jun 3 13:41:45.763: INFO: update-demo-nautilus-jvp4d is created but not running Jun 3 13:41:50.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8178' Jun 3 13:41:50.888: INFO: stderr: "" Jun 3 13:41:50.888: INFO: stdout: "update-demo-nautilus-jvp4d update-demo-nautilus-n98dd " Jun 3 13:41:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvp4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:50.975: INFO: stderr: "" Jun 3 13:41:50.975: INFO: stdout: "" Jun 3 13:41:50.975: INFO: update-demo-nautilus-jvp4d is created but not running Jun 3 13:41:55.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8178' Jun 3 13:41:56.063: INFO: stderr: "" Jun 3 13:41:56.063: INFO: stdout: "update-demo-nautilus-jvp4d update-demo-nautilus-n98dd " Jun 3 13:41:56.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvp4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:56.160: INFO: stderr: "" Jun 3 13:41:56.160: INFO: stdout: "true" Jun 3 13:41:56.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvp4d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:56.259: INFO: stderr: "" Jun 3 13:41:56.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:41:56.259: INFO: validating pod update-demo-nautilus-jvp4d Jun 3 13:41:56.271: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:41:56.272: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:41:56.272: INFO: update-demo-nautilus-jvp4d is verified up and running Jun 3 13:41:56.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n98dd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:56.366: INFO: stderr: "" Jun 3 13:41:56.366: INFO: stdout: "true" Jun 3 13:41:56.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n98dd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:41:56.446: INFO: stderr: "" Jun 3 13:41:56.446: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:41:56.446: INFO: validating pod update-demo-nautilus-n98dd Jun 3 13:41:56.477: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:41:56.477: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:41:56.477: INFO: update-demo-nautilus-n98dd is verified up and running STEP: rolling-update to new replication controller Jun 3 13:41:56.480: INFO: scanned /root for discovery docs: Jun 3 13:41:56.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8178' Jun 3 13:42:19.119: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 3 13:42:19.119: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 3 13:42:19.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8178' Jun 3 13:42:19.206: INFO: stderr: "" Jun 3 13:42:19.206: INFO: stdout: "update-demo-kitten-qwp88 update-demo-kitten-tn444 " Jun 3 13:42:19.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qwp88 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:42:19.297: INFO: stderr: "" Jun 3 13:42:19.297: INFO: stdout: "true" Jun 3 13:42:19.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qwp88 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:42:19.397: INFO: stderr: "" Jun 3 13:42:19.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 3 13:42:19.397: INFO: validating pod update-demo-kitten-qwp88 Jun 3 13:42:19.426: INFO: got data: { "image": "kitten.jpg" } Jun 3 13:42:19.426: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 3 13:42:19.426: INFO: update-demo-kitten-qwp88 is verified up and running Jun 3 13:42:19.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tn444 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:42:19.506: INFO: stderr: "" Jun 3 13:42:19.506: INFO: stdout: "true" Jun 3 13:42:19.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tn444 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8178' Jun 3 13:42:19.593: INFO: stderr: "" Jun 3 13:42:19.594: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 3 13:42:19.594: INFO: validating pod update-demo-kitten-tn444 Jun 3 13:42:19.622: INFO: got data: { "image": "kitten.jpg" } Jun 3 13:42:19.622: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 3 13:42:19.622: INFO: update-demo-kitten-tn444 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:42:19.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8178" for this suite. 
Jun 3 13:42:43.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:42:43.711: INFO: namespace kubectl-8178 deletion completed in 24.085725626s • [SLOW TEST:59.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:42:43.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:42:43.756: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 3 13:42:45.831: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:42:46.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9732" for this suite. Jun 3 13:42:52.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:42:53.046: INFO: namespace replication-controller-9732 deletion completed in 6.089304927s • [SLOW TEST:9.335 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:42:53.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the 
container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 13:42:56.286: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:42:56.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1840" for this suite. Jun 3 13:43:02.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:43:02.421: INFO: namespace container-runtime-1840 deletion completed in 6.092227361s • [SLOW TEST:9.374 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:43:02.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 3 13:43:02.514: INFO: Waiting up to 5m0s for pod "var-expansion-2109efd4-939d-4efd-996f-389aea573123" in namespace "var-expansion-2481" to be "success or failure" Jun 3 13:43:02.517: INFO: Pod "var-expansion-2109efd4-939d-4efd-996f-389aea573123": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573215ms Jun 3 13:43:04.522: INFO: Pod "var-expansion-2109efd4-939d-4efd-996f-389aea573123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008176622s Jun 3 13:43:06.543: INFO: Pod "var-expansion-2109efd4-939d-4efd-996f-389aea573123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029508761s STEP: Saw pod success Jun 3 13:43:06.543: INFO: Pod "var-expansion-2109efd4-939d-4efd-996f-389aea573123" satisfied condition "success or failure" Jun 3 13:43:06.546: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2109efd4-939d-4efd-996f-389aea573123 container dapi-container: STEP: delete the pod Jun 3 13:43:06.593: INFO: Waiting for pod var-expansion-2109efd4-939d-4efd-996f-389aea573123 to disappear Jun 3 13:43:06.619: INFO: Pod var-expansion-2109efd4-939d-4efd-996f-389aea573123 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:43:06.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2481" for this suite. 
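The variable-expansion test above exercises Kubernetes' `$(VAR)` substitution in a container's `command`, which Kubernetes resolves from the container's `env` list before the container starts; this is distinct from shell `$VAR` expansion, which would happen inside the container. A hand-written manifest illustrating the same mechanism (all names hypothetical; the test's real spec is generated in code):

```yaml
# Hypothetical manifest illustrating $(VAR) command substitution.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"
    # $(TEST_VAR) is substituted by Kubernetes from the env list above,
    # with no shell involved.
    command: ["echo", "$(TEST_VAR)"]
```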
Jun 3 13:43:12.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:43:12.716: INFO: namespace var-expansion-2481 deletion completed in 6.092907926s • [SLOW TEST:10.294 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:43:12.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4896 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 13:43:12.843: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 3 13:43:42.976: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.212:8080/dial?request=hostName&protocol=http&host=10.244.2.211&port=8080&tries=1'] Namespace:pod-network-test-4896 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jun 3 13:43:42.976: INFO: >>> kubeConfig: /root/.kube/config I0603 13:43:43.007446 6 log.go:172] (0xc000dca580) (0xc001358aa0) Create stream I0603 13:43:43.007487 6 log.go:172] (0xc000dca580) (0xc001358aa0) Stream added, broadcasting: 1 I0603 13:43:43.009917 6 log.go:172] (0xc000dca580) Reply frame received for 1 I0603 13:43:43.009959 6 log.go:172] (0xc000dca580) (0xc001358c80) Create stream I0603 13:43:43.009972 6 log.go:172] (0xc000dca580) (0xc001358c80) Stream added, broadcasting: 3 I0603 13:43:43.010856 6 log.go:172] (0xc000dca580) Reply frame received for 3 I0603 13:43:43.010893 6 log.go:172] (0xc000dca580) (0xc001358dc0) Create stream I0603 13:43:43.010907 6 log.go:172] (0xc000dca580) (0xc001358dc0) Stream added, broadcasting: 5 I0603 13:43:43.011817 6 log.go:172] (0xc000dca580) Reply frame received for 5 I0603 13:43:43.155322 6 log.go:172] (0xc000dca580) Data frame received for 3 I0603 13:43:43.155417 6 log.go:172] (0xc001358c80) (3) Data frame handling I0603 13:43:43.155465 6 log.go:172] (0xc001358c80) (3) Data frame sent I0603 13:43:43.155873 6 log.go:172] (0xc000dca580) Data frame received for 3 I0603 13:43:43.155893 6 log.go:172] (0xc001358c80) (3) Data frame handling I0603 13:43:43.155928 6 log.go:172] (0xc000dca580) Data frame received for 5 I0603 13:43:43.155960 6 log.go:172] (0xc001358dc0) (5) Data frame handling I0603 13:43:43.157744 6 log.go:172] (0xc000dca580) Data frame received for 1 I0603 13:43:43.157760 6 log.go:172] (0xc001358aa0) (1) Data frame handling I0603 13:43:43.157766 6 log.go:172] (0xc001358aa0) (1) Data frame sent I0603 13:43:43.157774 6 log.go:172] (0xc000dca580) (0xc001358aa0) Stream removed, broadcasting: 1 I0603 13:43:43.157808 6 log.go:172] (0xc000dca580) Go away received I0603 13:43:43.157849 6 log.go:172] (0xc000dca580) (0xc001358aa0) Stream removed, broadcasting: 1 I0603 13:43:43.157866 6 log.go:172] (0xc000dca580) (0xc001358c80) Stream removed, broadcasting: 3 I0603 13:43:43.157878 6 log.go:172] 
(0xc000dca580) (0xc001358dc0) Stream removed, broadcasting: 5 Jun 3 13:43:43.157: INFO: Waiting for endpoints: map[] Jun 3 13:43:43.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.212:8080/dial?request=hostName&protocol=http&host=10.244.1.6&port=8080&tries=1'] Namespace:pod-network-test-4896 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 13:43:43.161: INFO: >>> kubeConfig: /root/.kube/config I0603 13:43:43.190331 6 log.go:172] (0xc00079a9a0) (0xc001106280) Create stream I0603 13:43:43.190362 6 log.go:172] (0xc00079a9a0) (0xc001106280) Stream added, broadcasting: 1 I0603 13:43:43.192051 6 log.go:172] (0xc00079a9a0) Reply frame received for 1 I0603 13:43:43.192094 6 log.go:172] (0xc00079a9a0) (0xc001106320) Create stream I0603 13:43:43.192108 6 log.go:172] (0xc00079a9a0) (0xc001106320) Stream added, broadcasting: 3 I0603 13:43:43.192972 6 log.go:172] (0xc00079a9a0) Reply frame received for 3 I0603 13:43:43.193005 6 log.go:172] (0xc00079a9a0) (0xc002764280) Create stream I0603 13:43:43.193017 6 log.go:172] (0xc00079a9a0) (0xc002764280) Stream added, broadcasting: 5 I0603 13:43:43.194128 6 log.go:172] (0xc00079a9a0) Reply frame received for 5 I0603 13:43:43.255537 6 log.go:172] (0xc00079a9a0) Data frame received for 3 I0603 13:43:43.255562 6 log.go:172] (0xc001106320) (3) Data frame handling I0603 13:43:43.255579 6 log.go:172] (0xc001106320) (3) Data frame sent I0603 13:43:43.256200 6 log.go:172] (0xc00079a9a0) Data frame received for 3 I0603 13:43:43.256223 6 log.go:172] (0xc001106320) (3) Data frame handling I0603 13:43:43.256321 6 log.go:172] (0xc00079a9a0) Data frame received for 5 I0603 13:43:43.256337 6 log.go:172] (0xc002764280) (5) Data frame handling I0603 13:43:43.258491 6 log.go:172] (0xc00079a9a0) Data frame received for 1 I0603 13:43:43.258513 6 log.go:172] (0xc001106280) (1) Data frame handling I0603 13:43:43.258526 6 log.go:172] 
(0xc001106280) (1) Data frame sent I0603 13:43:43.258547 6 log.go:172] (0xc00079a9a0) (0xc001106280) Stream removed, broadcasting: 1 I0603 13:43:43.258618 6 log.go:172] (0xc00079a9a0) Go away received I0603 13:43:43.258645 6 log.go:172] (0xc00079a9a0) (0xc001106280) Stream removed, broadcasting: 1 I0603 13:43:43.258659 6 log.go:172] (0xc00079a9a0) (0xc001106320) Stream removed, broadcasting: 3 I0603 13:43:43.258671 6 log.go:172] (0xc00079a9a0) (0xc002764280) Stream removed, broadcasting: 5 Jun 3 13:43:43.258: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:43:43.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4896" for this suite. Jun 3 13:44:07.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:44:07.374: INFO: namespace pod-network-test-4896 deletion completed in 24.112125025s • [SLOW TEST:54.658 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jun 3 13:44:07.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 3 13:44:07.479: INFO: Waiting up to 5m0s for pod "pod-a2170baa-ebba-48e0-af02-78dc5e103c40" in namespace "emptydir-4262" to be "success or failure" Jun 3 13:44:07.489: INFO: Pod "pod-a2170baa-ebba-48e0-af02-78dc5e103c40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044744ms Jun 3 13:44:09.493: INFO: Pod "pod-a2170baa-ebba-48e0-af02-78dc5e103c40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014433744s Jun 3 13:44:11.497: INFO: Pod "pod-a2170baa-ebba-48e0-af02-78dc5e103c40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018414224s STEP: Saw pod success Jun 3 13:44:11.497: INFO: Pod "pod-a2170baa-ebba-48e0-af02-78dc5e103c40" satisfied condition "success or failure" Jun 3 13:44:11.499: INFO: Trying to get logs from node iruya-worker pod pod-a2170baa-ebba-48e0-af02-78dc5e103c40 container test-container: STEP: delete the pod Jun 3 13:44:11.620: INFO: Waiting for pod pod-a2170baa-ebba-48e0-af02-78dc5e103c40 to disappear Jun 3 13:44:11.778: INFO: Pod pod-a2170baa-ebba-48e0-af02-78dc5e103c40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:44:11.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4262" for this suite. 
Jun 3 13:44:17.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:44:17.873: INFO: namespace emptydir-4262 deletion completed in 6.091035728s • [SLOW TEST:10.498 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:44:17.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 3 13:44:21.990: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-642a25c0-83d6-4219-9678-0afab40f4a13,GenerateName:,Namespace:events-1375,SelfLink:/api/v1/namespaces/events-1375/pods/send-events-642a25c0-83d6-4219-9678-0afab40f4a13,UID:b26ea714-a4a3-43bc-920c-c7c5f79dcc06,ResourceVersion:14447352,Generation:0,CreationTimestamp:2020-06-03 
13:44:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 949894007,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9v48g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v48g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9v48g true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021f9b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021f9ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:44:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:44:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:44:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 13:44:17 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.7,StartTime:2020-06-03 13:44:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-03 13:44:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://6c9ae9de503e5e91d121d361031f780ae99d965374a14fd66507c589e3240540}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 3 13:44:23.995: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 3 13:44:26.000: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:44:26.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1375" for this suite. 
Jun 3 13:45:04.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:45:04.135: INFO: namespace events-1375 deletion completed in 38.118957145s • [SLOW TEST:46.260 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:45:04.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 in namespace container-probe-2554 Jun 3 13:45:08.201: INFO: Started pod liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 in namespace container-probe-2554 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 13:45:08.205: INFO: Initial restart count of pod liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is 0 Jun 3 13:45:26.249: INFO: Restart count of pod 
container-probe-2554/liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is now 1 (18.04430922s elapsed) Jun 3 13:45:46.292: INFO: Restart count of pod container-probe-2554/liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is now 2 (38.086513837s elapsed) Jun 3 13:46:08.338: INFO: Restart count of pod container-probe-2554/liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is now 3 (1m0.133004241s elapsed) Jun 3 13:46:28.378: INFO: Restart count of pod container-probe-2554/liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is now 4 (1m20.172506488s elapsed) Jun 3 13:47:36.803: INFO: Restart count of pod container-probe-2554/liveness-6a733b69-536a-494e-b60a-98ff42bd0aa8 is now 5 (2m28.598249043s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:47:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2554" for this suite. Jun 3 13:47:42.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:47:42.951: INFO: namespace container-probe-2554 deletion completed in 6.109180505s • [SLOW TEST:158.816 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jun 3 13:47:42.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:47:43.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356" in namespace "projected-480" to be "success or failure" Jun 3 13:47:43.017: INFO: Pod "downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356": Phase="Pending", Reason="", readiness=false. Elapsed: 3.943577ms Jun 3 13:47:45.038: INFO: Pod "downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024896894s Jun 3 13:47:47.043: INFO: Pod "downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029274574s STEP: Saw pod success Jun 3 13:47:47.043: INFO: Pod "downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356" satisfied condition "success or failure" Jun 3 13:47:47.046: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356 container client-container: STEP: delete the pod Jun 3 13:47:47.066: INFO: Waiting for pod downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356 to disappear Jun 3 13:47:47.070: INFO: Pod downwardapi-volume-1c102f9b-490d-4def-92d4-7f733b797356 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:47:47.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-480" for this suite. Jun 3 13:47:53.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:47:53.162: INFO: namespace projected-480 deletion completed in 6.088326027s • [SLOW TEST:10.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:47:53.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-9371 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9371 STEP: Deleting pre-stop pod Jun 3 13:48:06.258: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:48:06.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9371" for this suite. 
Jun 3 13:48:44.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:48:44.391: INFO: namespace prestop-9371 deletion completed in 38.117904383s • [SLOW TEST:51.228 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:48:44.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 3 13:48:52.508: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:48:52.515: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:48:54.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:48:54.520: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:48:56.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:48:56.520: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:48:58.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:48:58.519: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:00.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:00.520: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:02.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:02.519: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:04.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:04.519: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:06.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:06.520: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:08.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:08.585: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:10.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:10.520: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:12.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:12.520: INFO: Pod pod-with-poststart-exec-hook still 
exists Jun 3 13:49:14.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:14.537: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:16.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:16.519: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:18.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:18.519: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:20.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:20.542: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 13:49:22.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 13:49:22.520: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:49:22.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5812" for this suite. 
Jun 3 13:49:44.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:49:44.639: INFO: namespace container-lifecycle-hook-5812 deletion completed in 22.114606995s • [SLOW TEST:60.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:49:44.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:50:06.750: INFO: Container started at 2020-06-03 13:49:47 +0000 UTC, pod became ready at 2020-06-03 13:50:05 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:50:06.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6703" for this suite. Jun 3 13:50:28.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:50:28.880: INFO: namespace container-probe-6703 deletion completed in 22.126521414s • [SLOW TEST:44.240 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:50:28.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 3 13:50:34.013: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] 
[sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:50:35.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9421" for this suite. Jun 3 13:50:57.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:50:57.153: INFO: namespace replicaset-9421 deletion completed in 22.10500092s • [SLOW TEST:28.272 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:50:57.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:50:57.259: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d" in namespace "downward-api-4956" to be "success or failure" Jun 3 13:50:57.263: INFO: Pod "downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131176ms Jun 3 13:50:59.323: INFO: Pod "downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063895841s Jun 3 13:51:01.327: INFO: Pod "downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067482026s STEP: Saw pod success Jun 3 13:51:01.327: INFO: Pod "downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d" satisfied condition "success or failure" Jun 3 13:51:01.330: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d container client-container: STEP: delete the pod Jun 3 13:51:01.361: INFO: Waiting for pod downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d to disappear Jun 3 13:51:01.365: INFO: Pod downwardapi-volume-b1e1e929-78bc-4ab1-82ee-b90b49f8d82d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:51:01.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4956" for this suite. 
Jun 3 13:51:07.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:51:07.475: INFO: namespace downward-api-4956 deletion completed in 6.107149513s • [SLOW TEST:10.322 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:51:07.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-n6n6 STEP: Creating a pod to test atomic-volume-subpath Jun 3 13:51:07.566: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-n6n6" in namespace "subpath-3064" to be "success or failure" Jun 3 13:51:07.570: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.621298ms Jun 3 13:51:09.574: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008064905s Jun 3 13:51:11.578: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012296898s Jun 3 13:51:13.582: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 6.016304445s Jun 3 13:51:15.587: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 8.020566942s Jun 3 13:51:17.590: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 10.024387814s Jun 3 13:51:19.594: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 12.028079684s Jun 3 13:51:21.600: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 14.034209666s Jun 3 13:51:23.605: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 16.038788999s Jun 3 13:51:25.608: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 18.04225647s Jun 3 13:51:27.613: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 20.046555484s Jun 3 13:51:29.617: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Running", Reason="", readiness=true. Elapsed: 22.051020324s Jun 3 13:51:31.621: INFO: Pod "pod-subpath-test-projected-n6n6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055217342s STEP: Saw pod success Jun 3 13:51:31.621: INFO: Pod "pod-subpath-test-projected-n6n6" satisfied condition "success or failure" Jun 3 13:51:31.624: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-n6n6 container test-container-subpath-projected-n6n6: STEP: delete the pod Jun 3 13:51:31.690: INFO: Waiting for pod pod-subpath-test-projected-n6n6 to disappear Jun 3 13:51:31.832: INFO: Pod pod-subpath-test-projected-n6n6 no longer exists STEP: Deleting pod pod-subpath-test-projected-n6n6 Jun 3 13:51:31.832: INFO: Deleting pod "pod-subpath-test-projected-n6n6" in namespace "subpath-3064" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:51:31.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3064" for this suite. Jun 3 13:51:37.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:51:37.932: INFO: namespace subpath-3064 deletion completed in 6.093847844s • [SLOW TEST:30.457 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:51:37.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 13:51:37.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9992' Jun 3 13:51:40.516: INFO: stderr: "" Jun 3 13:51:40.516: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jun 3 13:51:40.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9992' Jun 3 13:51:51.863: INFO: stderr: "" Jun 3 13:51:51.863: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:51:51.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9992" for this suite. 
Jun 3 13:51:57.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:51:57.960: INFO: namespace kubectl-9992 deletion completed in 6.094015148s • [SLOW TEST:20.027 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:51:57.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-df545fb1-f8ea-40b1-be3f-f68561be1c7c STEP: Creating a pod to test consume configMaps Jun 3 13:51:58.049: INFO: Waiting up to 5m0s for pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f" in namespace "configmap-3712" to be "success or failure" Jun 3 13:51:58.056: INFO: Pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.33681ms Jun 3 13:52:00.060: INFO: Pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010732065s Jun 3 13:52:02.065: INFO: Pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.015337017s Jun 3 13:52:04.068: INFO: Pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019228304s STEP: Saw pod success Jun 3 13:52:04.069: INFO: Pod "pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f" satisfied condition "success or failure" Jun 3 13:52:04.070: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f container configmap-volume-test: STEP: delete the pod Jun 3 13:52:04.099: INFO: Waiting for pod pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f to disappear Jun 3 13:52:04.162: INFO: Pod pod-configmaps-2df179c7-ad63-4fff-a626-39e815041d8f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:52:04.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3712" for this suite. 
Jun 3 13:52:10.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:52:10.261: INFO: namespace configmap-3712 deletion completed in 6.095237843s • [SLOW TEST:12.301 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:52:10.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-1653 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1653 to expose endpoints map[] Jun 3 13:52:10.392: INFO: Get endpoints failed (13.73598ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 3 13:52:11.395: INFO: successfully validated that service endpoint-test2 in namespace services-1653 exposes endpoints map[] (1.017119742s elapsed) STEP: Creating pod pod1 in namespace services-1653 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-1653 to expose endpoints map[pod1:[80]] Jun 3 13:52:15.466: INFO: successfully validated that service endpoint-test2 in namespace services-1653 exposes endpoints map[pod1:[80]] (4.064152141s elapsed) STEP: Creating pod pod2 in namespace services-1653 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1653 to expose endpoints map[pod1:[80] pod2:[80]] Jun 3 13:52:18.604: INFO: successfully validated that service endpoint-test2 in namespace services-1653 exposes endpoints map[pod1:[80] pod2:[80]] (3.135475212s elapsed) STEP: Deleting pod pod1 in namespace services-1653 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1653 to expose endpoints map[pod2:[80]] Jun 3 13:52:19.659: INFO: successfully validated that service endpoint-test2 in namespace services-1653 exposes endpoints map[pod2:[80]] (1.049525159s elapsed) STEP: Deleting pod pod2 in namespace services-1653 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1653 to expose endpoints map[] Jun 3 13:52:20.863: INFO: successfully validated that service endpoint-test2 in namespace services-1653 exposes endpoints map[] (1.19988149s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:52:20.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1653" for this suite. 
Jun 3 13:52:26.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:52:27.035: INFO: namespace services-1653 deletion completed in 6.091326655s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.774 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:52:27.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 3 13:52:31.683: INFO: Successfully updated pod "labelsupdate7728a407-1bf8-4c6a-9b0e-b999a13b76ff" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:52:33.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "projected-202" for this suite. Jun 3 13:52:55.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:52:55.858: INFO: namespace projected-202 deletion completed in 22.107510993s • [SLOW TEST:28.822 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:52:55.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:52:55.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129" in namespace "downward-api-5927" to be "success or failure" Jun 3 13:52:55.932: INFO: Pod "downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.031359ms Jun 3 13:52:57.936: INFO: Pod "downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007479335s Jun 3 13:52:59.941: INFO: Pod "downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01198897s STEP: Saw pod success Jun 3 13:52:59.941: INFO: Pod "downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129" satisfied condition "success or failure" Jun 3 13:52:59.944: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129 container client-container: STEP: delete the pod Jun 3 13:52:59.990: INFO: Waiting for pod downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129 to disappear Jun 3 13:52:59.992: INFO: Pod downwardapi-volume-713731d6-4f50-45b4-906e-d32db3cba129 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:52:59.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5927" for this suite. 
Jun 3 13:53:06.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:53:06.096: INFO: namespace downward-api-5927 deletion completed in 6.100880806s • [SLOW TEST:10.238 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:53:06.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:53:06.184: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:53:10.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-669" for this suite. 
Jun 3 13:53:56.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:53:56.471: INFO: namespace pods-669 deletion completed in 46.123700731s • [SLOW TEST:50.375 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:53:56.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 3 13:53:56.531: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 5.053511ms)
Jun 3 13:53:56.535: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.749357ms)
Jun 3 13:53:56.539: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.959188ms)
Jun 3 13:53:56.543: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.444177ms)
Jun 3 13:53:56.547: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.777067ms)
Jun 3 13:53:56.550: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.692724ms)
Jun 3 13:53:56.554: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.160368ms)
Jun 3 13:53:56.557: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.332449ms)
Jun 3 13:53:56.560: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.997431ms)
Jun 3 13:53:56.563: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.197494ms)
Jun 3 13:53:56.577: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 14.13802ms)
Jun 3 13:53:56.581: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.807813ms)
Jun 3 13:53:56.584: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.863276ms)
Jun 3 13:53:56.587: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.356648ms)
Jun 3 13:53:56.590: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.897617ms)
Jun 3 13:53:56.594: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.149752ms)
Jun 3 13:53:56.596: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.694216ms)
Jun 3 13:53:56.599: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.076389ms)
Jun 3 13:53:56.602: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.861135ms)
Jun 3 13:53:56.606: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.340241ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:53:56.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3903" for this suite. Jun 3 13:54:02.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:54:02.711: INFO: namespace proxy-3903 deletion completed in 6.102092333s • [SLOW TEST:6.239 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:54:02.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 3 13:54:02.781: INFO: Waiting up to 5m0s for pod "pod-edc76404-b554-4107-83b6-696ad24585d2" in namespace "emptydir-3702" to be "success or
failure" Jun 3 13:54:02.816: INFO: Pod "pod-edc76404-b554-4107-83b6-696ad24585d2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.035768ms Jun 3 13:54:04.821: INFO: Pod "pod-edc76404-b554-4107-83b6-696ad24585d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039683674s Jun 3 13:54:06.825: INFO: Pod "pod-edc76404-b554-4107-83b6-696ad24585d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043923779s STEP: Saw pod success Jun 3 13:54:06.825: INFO: Pod "pod-edc76404-b554-4107-83b6-696ad24585d2" satisfied condition "success or failure" Jun 3 13:54:06.828: INFO: Trying to get logs from node iruya-worker pod pod-edc76404-b554-4107-83b6-696ad24585d2 container test-container: STEP: delete the pod Jun 3 13:54:06.865: INFO: Waiting for pod pod-edc76404-b554-4107-83b6-696ad24585d2 to disappear Jun 3 13:54:06.910: INFO: Pod pod-edc76404-b554-4107-83b6-696ad24585d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:54:06.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3702" for this suite. 
Jun 3 13:54:13.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:54:13.240: INFO: namespace emptydir-3702 deletion completed in 6.325891083s • [SLOW TEST:10.527 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:54:13.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 13:54:13.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675" in namespace "downward-api-6965" to be "success or failure" Jun 3 13:54:13.311: INFO: Pod "downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.902889ms Jun 3 13:54:15.314: INFO: Pod "downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02718439s Jun 3 13:54:17.319: INFO: Pod "downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031664667s STEP: Saw pod success Jun 3 13:54:17.319: INFO: Pod "downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675" satisfied condition "success or failure" Jun 3 13:54:17.322: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675 container client-container: STEP: delete the pod Jun 3 13:54:17.359: INFO: Waiting for pod downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675 to disappear Jun 3 13:54:17.365: INFO: Pod downwardapi-volume-84fca89f-f5c5-4ce9-9969-dd880b7bb675 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:54:17.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6965" for this suite. 
Jun 3 13:54:23.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:54:23.453: INFO: namespace downward-api-6965 deletion completed in 6.08404236s • [SLOW TEST:10.213 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:54:23.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jun 3 13:54:23.518: INFO: Waiting up to 5m0s for pod "var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c" in namespace "var-expansion-3751" to be "success or failure" Jun 3 13:54:23.538: INFO: Pod "var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.657228ms Jun 3 13:54:25.543: INFO: Pod "var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025222748s Jun 3 13:54:27.547: INFO: Pod "var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029347396s STEP: Saw pod success Jun 3 13:54:27.547: INFO: Pod "var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c" satisfied condition "success or failure" Jun 3 13:54:27.550: INFO: Trying to get logs from node iruya-worker pod var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c container dapi-container: STEP: delete the pod Jun 3 13:54:27.569: INFO: Waiting for pod var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c to disappear Jun 3 13:54:27.574: INFO: Pod var-expansion-5ea7b4d9-defd-434b-8de7-3657e0382b6c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:54:27.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3751" for this suite. Jun 3 13:54:33.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:54:33.719: INFO: namespace var-expansion-3751 deletion completed in 6.142504522s • [SLOW TEST:10.265 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:54:33.720: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jun 3 13:54:34.297: INFO: created pod pod-service-account-defaultsa Jun 3 13:54:34.297: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 3 13:54:34.306: INFO: created pod pod-service-account-mountsa Jun 3 13:54:34.306: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 3 13:54:34.374: INFO: created pod pod-service-account-nomountsa Jun 3 13:54:34.375: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 3 13:54:34.389: INFO: created pod pod-service-account-defaultsa-mountspec Jun 3 13:54:34.389: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 3 13:54:34.455: INFO: created pod pod-service-account-mountsa-mountspec Jun 3 13:54:34.455: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 3 13:54:34.519: INFO: created pod pod-service-account-nomountsa-mountspec Jun 3 13:54:34.519: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 3 13:54:34.563: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 3 13:54:34.563: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 3 13:54:34.717: INFO: created pod pod-service-account-mountsa-nomountspec Jun 3 13:54:34.717: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 3 13:54:34.737: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 3 13:54:34.737: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: 
false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:54:34.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8516" for this suite. Jun 3 13:55:02.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:55:03.019: INFO: namespace svcaccounts-8516 deletion completed in 28.231459743s • [SLOW TEST:29.300 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:55:03.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9939 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to 
schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9939 STEP: Creating statefulset with conflicting port in namespace statefulset-9939 STEP: Waiting until pod test-pod will start running in namespace statefulset-9939 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9939 Jun 3 13:55:07.101: INFO: Observed stateful pod in namespace: statefulset-9939, name: ss-0, uid: 12eb21fb-a587-4008-bb0a-588a2b5e6660, status phase: Pending. Waiting for statefulset controller to delete. Jun 3 13:55:07.491: INFO: Observed stateful pod in namespace: statefulset-9939, name: ss-0, uid: 12eb21fb-a587-4008-bb0a-588a2b5e6660, status phase: Failed. Waiting for statefulset controller to delete. Jun 3 13:55:07.499: INFO: Observed stateful pod in namespace: statefulset-9939, name: ss-0, uid: 12eb21fb-a587-4008-bb0a-588a2b5e6660, status phase: Failed. Waiting for statefulset controller to delete. Jun 3 13:55:07.504: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9939 STEP: Removing pod with conflicting port in namespace statefulset-9939 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9939 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 3 13:55:13.604: INFO: Deleting all statefulset in ns statefulset-9939 Jun 3 13:55:13.608: INFO: Scaling statefulset ss to 0 Jun 3 13:55:23.636: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 13:55:23.656: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:55:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9939" for this suite. 
Jun 3 13:55:29.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:55:29.787: INFO: namespace statefulset-9939 deletion completed in 6.116500511s • [SLOW TEST:26.767 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:55:29.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:55:29.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubelet-test-5192" for this suite. Jun 3 13:55:35.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:55:36.052: INFO: namespace kubelet-test-5192 deletion completed in 6.091242755s • [SLOW TEST:6.265 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:55:36.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 3 13:55:36.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9794' Jun 3 13:55:36.325: INFO: stderr: "" Jun 3 13:55:36.325: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 13:55:36.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:36.427: INFO: stderr: "" Jun 3 13:55:36.427: INFO: stdout: "update-demo-nautilus-727g2 update-demo-nautilus-pqrrw " Jun 3 13:55:36.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-727g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:36.521: INFO: stderr: "" Jun 3 13:55:36.521: INFO: stdout: "" Jun 3 13:55:36.521: INFO: update-demo-nautilus-727g2 is created but not running Jun 3 13:55:41.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:41.626: INFO: stderr: "" Jun 3 13:55:41.626: INFO: stdout: "update-demo-nautilus-727g2 update-demo-nautilus-pqrrw " Jun 3 13:55:41.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-727g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:41.727: INFO: stderr: "" Jun 3 13:55:41.727: INFO: stdout: "true" Jun 3 13:55:41.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-727g2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:41.822: INFO: stderr: "" Jun 3 13:55:41.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:55:41.822: INFO: validating pod update-demo-nautilus-727g2 Jun 3 13:55:41.826: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:55:41.826: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:55:41.826: INFO: update-demo-nautilus-727g2 is verified up and running Jun 3 13:55:41.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:41.924: INFO: stderr: "" Jun 3 13:55:41.924: INFO: stdout: "true" Jun 3 13:55:41.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:42.008: INFO: stderr: "" Jun 3 13:55:42.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:55:42.008: INFO: validating pod update-demo-nautilus-pqrrw Jun 3 13:55:42.012: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:55:42.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 3 13:55:42.012: INFO: update-demo-nautilus-pqrrw is verified up and running STEP: scaling down the replication controller Jun 3 13:55:42.014: INFO: scanned /root for discovery docs: Jun 3 13:55:42.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9794' Jun 3 13:55:43.132: INFO: stderr: "" Jun 3 13:55:43.132: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 13:55:43.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:43.296: INFO: stderr: "" Jun 3 13:55:43.296: INFO: stdout: "update-demo-nautilus-727g2 update-demo-nautilus-pqrrw " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 13:55:48.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:48.400: INFO: stderr: "" Jun 3 13:55:48.400: INFO: stdout: "update-demo-nautilus-727g2 update-demo-nautilus-pqrrw " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 13:55:53.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:53.507: INFO: stderr: "" Jun 3 13:55:53.507: INFO: stdout: "update-demo-nautilus-pqrrw " Jun 3 13:55:53.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:53.612: INFO: stderr: "" Jun 3 13:55:53.612: INFO: stdout: "true" Jun 3 13:55:53.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:53.700: INFO: stderr: "" Jun 3 13:55:53.700: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:55:53.700: INFO: validating pod update-demo-nautilus-pqrrw Jun 3 13:55:53.703: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:55:53.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:55:53.703: INFO: update-demo-nautilus-pqrrw is verified up and running STEP: scaling up the replication controller Jun 3 13:55:53.704: INFO: scanned /root for discovery docs: Jun 3 13:55:53.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9794' Jun 3 13:55:54.833: INFO: stderr: "" Jun 3 13:55:54.833: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 13:55:54.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:55:54.926: INFO: stderr: "" Jun 3 13:55:54.926: INFO: stdout: "update-demo-nautilus-94klr update-demo-nautilus-pqrrw " Jun 3 13:55:54.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94klr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:55:55.020: INFO: stderr: "" Jun 3 13:55:55.020: INFO: stdout: "" Jun 3 13:55:55.020: INFO: update-demo-nautilus-94klr is created but not running Jun 3 13:56:00.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9794' Jun 3 13:56:00.119: INFO: stderr: "" Jun 3 13:56:00.119: INFO: stdout: "update-demo-nautilus-94klr update-demo-nautilus-pqrrw " Jun 3 13:56:00.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94klr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:56:00.215: INFO: stderr: "" Jun 3 13:56:00.215: INFO: stdout: "true" Jun 3 13:56:00.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-94klr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:56:00.323: INFO: stderr: "" Jun 3 13:56:00.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:56:00.323: INFO: validating pod update-demo-nautilus-94klr Jun 3 13:56:00.329: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:56:00.329: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:56:00.329: INFO: update-demo-nautilus-94klr is verified up and running Jun 3 13:56:00.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:56:00.421: INFO: stderr: "" Jun 3 13:56:00.421: INFO: stdout: "true" Jun 3 13:56:00.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqrrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9794' Jun 3 13:56:00.511: INFO: stderr: "" Jun 3 13:56:00.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 3 13:56:00.512: INFO: validating pod update-demo-nautilus-pqrrw Jun 3 13:56:00.515: INFO: got data: { "image": "nautilus.jpg" } Jun 3 13:56:00.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 13:56:00.515: INFO: update-demo-nautilus-pqrrw is verified up and running STEP: using delete to clean up resources Jun 3 13:56:00.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9794' Jun 3 13:56:00.610: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 13:56:00.610: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 3 13:56:00.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9794' Jun 3 13:56:00.701: INFO: stderr: "No resources found.\n" Jun 3 13:56:00.701: INFO: stdout: "" Jun 3 13:56:00.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9794 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 13:56:00.801: INFO: stderr: "" Jun 3 13:56:00.801: INFO: stdout: "update-demo-nautilus-94klr\nupdate-demo-nautilus-pqrrw\n" Jun 3 13:56:01.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9794' Jun 3 13:56:01.419: INFO: stderr: "No resources found.\n" Jun 3 13:56:01.419: INFO: stdout: "" Jun 3 13:56:01.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9794 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 13:56:01.517: INFO: stderr: "" Jun 3 13:56:01.517: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:56:01.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9794" for this suite. 
Jun 3 13:56:23.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:56:23.679: INFO: namespace kubectl-9794 deletion completed in 22.159315876s • [SLOW TEST:47.627 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:56:23.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-0d8da2cc-0a79-49ab-a722-7d47113799cc STEP: Creating a pod to test consume secrets Jun 3 13:56:23.783: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72" in namespace "projected-4018" to be "success or failure" Jun 3 13:56:23.786: INFO: Pod "pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.277984ms Jun 3 13:56:25.790: INFO: Pod "pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007758005s Jun 3 13:56:27.795: INFO: Pod "pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012322974s STEP: Saw pod success Jun 3 13:56:27.795: INFO: Pod "pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72" satisfied condition "success or failure" Jun 3 13:56:27.797: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72 container projected-secret-volume-test: STEP: delete the pod Jun 3 13:56:27.818: INFO: Waiting for pod pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72 to disappear Jun 3 13:56:27.842: INFO: Pod pod-projected-secrets-5a67ac0a-089f-44cc-83de-4508a2498b72 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:56:27.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4018" for this suite. 
Jun 3 13:56:33.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:56:33.951: INFO: namespace projected-4018 deletion completed in 6.104666124s • [SLOW TEST:10.271 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:56:33.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d3efe500-512a-456b-9eb8-b2d53ebdc700 STEP: Creating a pod to test consume configMaps Jun 3 13:56:34.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b" in namespace "configmap-2046" to be "success or failure" Jun 3 13:56:34.044: INFO: Pod "pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.025491ms Jun 3 13:56:36.094: INFO: Pod "pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053137159s Jun 3 13:56:38.098: INFO: Pod "pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057068957s STEP: Saw pod success Jun 3 13:56:38.098: INFO: Pod "pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b" satisfied condition "success or failure" Jun 3 13:56:38.101: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b container configmap-volume-test: STEP: delete the pod Jun 3 13:56:38.123: INFO: Waiting for pod pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b to disappear Jun 3 13:56:38.128: INFO: Pod pod-configmaps-d6d0aa76-68d9-400a-99ae-9e62f905a61b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:56:38.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2046" for this suite. 
Jun 3 13:56:44.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:56:44.221: INFO: namespace configmap-2046 deletion completed in 6.090592233s • [SLOW TEST:10.270 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:56:44.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7d88f81a-ca82-4441-84a1-e6c270bcd2a8 STEP: Creating a pod to test consume configMaps Jun 3 13:56:44.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170" in namespace "projected-3929" to be "success or failure" Jun 3 13:56:44.338: INFO: Pod "pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.372828ms Jun 3 13:56:46.342: INFO: Pod "pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012776527s Jun 3 13:56:48.347: INFO: Pod "pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01694573s STEP: Saw pod success Jun 3 13:56:48.347: INFO: Pod "pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170" satisfied condition "success or failure" Jun 3 13:56:48.350: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170 container projected-configmap-volume-test: STEP: delete the pod Jun 3 13:56:48.635: INFO: Waiting for pod pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170 to disappear Jun 3 13:56:48.644: INFO: Pod pod-projected-configmaps-b6d22fba-88fb-4eeb-a847-a83a5040f170 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:56:48.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3929" for this suite. 
Jun 3 13:56:54.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:56:54.735: INFO: namespace projected-3929 deletion completed in 6.088028019s • [SLOW TEST:10.513 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:56:54.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 13:56:54.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3988' Jun 3 13:56:55.116: INFO: stderr: "" Jun 3 13:56:55.116: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 3 13:56:55.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3988' Jun 3 13:56:55.465: INFO: stderr: "" Jun 3 
13:56:55.465: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 3 13:56:56.470: INFO: Selector matched 1 pods for map[app:redis] Jun 3 13:56:56.470: INFO: Found 0 / 1 Jun 3 13:56:57.470: INFO: Selector matched 1 pods for map[app:redis] Jun 3 13:56:57.470: INFO: Found 0 / 1 Jun 3 13:56:58.470: INFO: Selector matched 1 pods for map[app:redis] Jun 3 13:56:58.470: INFO: Found 1 / 1 Jun 3 13:56:58.470: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 13:56:58.473: INFO: Selector matched 1 pods for map[app:redis] Jun 3 13:56:58.473: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 13:56:58.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-8ljzk --namespace=kubectl-3988' Jun 3 13:56:58.581: INFO: stderr: "" Jun 3 13:56:58.581: INFO: stdout: "Name: redis-master-8ljzk\nNamespace: kubectl-3988\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Wed, 03 Jun 2020 13:56:55 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.28\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://4517b42774cbefb89c9dd5dc1c189c83e31bd4953b8f8726a887a619d605165f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 03 Jun 2020 13:56:57 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gxm26 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gxm26:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gxm26\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-3988/redis-master-8ljzk to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Jun 3 13:56:58.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3988' Jun 3 13:56:58.703: INFO: stderr: "" Jun 3 13:56:58.703: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3988\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-8ljzk\n" Jun 3 13:56:58.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3988' Jun 3 13:56:58.805: INFO: stderr: "" Jun 3 13:56:58.805: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3988\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.106.31.224\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.28:6379\nSession Affinity: None\nEvents: \n" Jun 3 13:56:58.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 3 
13:56:58.945: INFO: stderr: "" Jun 3 13:56:58.946: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 03 Jun 2020 13:55:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 03 Jun 2020 13:55:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 03 Jun 2020 13:55:59 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 03 Jun 2020 13:55:59 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n 
Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 79d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 79d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 79d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 79d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 3 13:56:58.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3988' Jun 3 13:56:59.049: INFO: stderr: "" Jun 3 13:56:59.049: INFO: stdout: "Name: kubectl-3988\nLabels: e2e-framework=kubectl\n e2e-run=bae40aaf-a3eb-4160-9ff1-016a42a00545\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:56:59.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3988" for this suite. 
Jun 3 13:57:21.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:57:21.174: INFO: namespace kubectl-3988 deletion completed in 22.121047853s • [SLOW TEST:26.439 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:57:21.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 13:57:21.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 
--namespace=kubectl-4212' Jun 3 13:57:21.350: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 3 13:57:21.350: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jun 3 13:57:25.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4212' Jun 3 13:57:25.510: INFO: stderr: "" Jun 3 13:57:25.510: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:57:25.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4212" for this suite. 
Jun 3 13:57:47.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:57:47.584: INFO: namespace kubectl-4212 deletion completed in 22.070417144s • [SLOW TEST:26.410 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:57:47.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0603 13:57:48.764163 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 3 13:57:48.764: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:57:48.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8560" for this suite. 
Jun 3 13:57:54.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:57:54.890: INFO: namespace gc-8560 deletion completed in 6.123203056s • [SLOW TEST:7.306 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:57:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5ae8ede8-b9f6-4c9f-9086-e3add165f358 STEP: Creating a pod to test consume configMaps Jun 3 13:57:54.980: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e" in namespace "projected-9907" to be "success or failure" Jun 3 13:57:55.005: INFO: Pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.047037ms Jun 3 13:57:57.010: INFO: Pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029305568s Jun 3 13:57:59.014: INFO: Pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e": Phase="Running", Reason="", readiness=true. Elapsed: 4.033658739s Jun 3 13:58:01.019: INFO: Pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038318042s STEP: Saw pod success Jun 3 13:58:01.019: INFO: Pod "pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e" satisfied condition "success or failure" Jun 3 13:58:01.035: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e container projected-configmap-volume-test: STEP: delete the pod Jun 3 13:58:01.066: INFO: Waiting for pod pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e to disappear Jun 3 13:58:01.076: INFO: Pod pod-projected-configmaps-b2817d77-7663-4332-a532-d03a02d86f4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:58:01.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9907" for this suite. 
Jun 3 13:58:07.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:58:07.253: INFO: namespace projected-9907 deletion completed in 6.158128085s • [SLOW TEST:12.362 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:58:07.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3854 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 13:58:07.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 3 13:58:35.468: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.234:8080/dial?request=hostName&protocol=udp&host=10.244.2.233&port=8081&tries=1'] Namespace:pod-network-test-3854 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jun 3 13:58:35.468: INFO: >>> kubeConfig: /root/.kube/config I0603 13:58:35.504739 6 log.go:172] (0xc000de6f20) (0xc0027b79a0) Create stream I0603 13:58:35.504772 6 log.go:172] (0xc000de6f20) (0xc0027b79a0) Stream added, broadcasting: 1 I0603 13:58:35.507282 6 log.go:172] (0xc000de6f20) Reply frame received for 1 I0603 13:58:35.507342 6 log.go:172] (0xc000de6f20) (0xc0027b7a40) Create stream I0603 13:58:35.507357 6 log.go:172] (0xc000de6f20) (0xc0027b7a40) Stream added, broadcasting: 3 I0603 13:58:35.508348 6 log.go:172] (0xc000de6f20) Reply frame received for 3 I0603 13:58:35.508392 6 log.go:172] (0xc000de6f20) (0xc002961c20) Create stream I0603 13:58:35.508405 6 log.go:172] (0xc000de6f20) (0xc002961c20) Stream added, broadcasting: 5 I0603 13:58:35.509488 6 log.go:172] (0xc000de6f20) Reply frame received for 5 I0603 13:58:35.587741 6 log.go:172] (0xc000de6f20) Data frame received for 3 I0603 13:58:35.587801 6 log.go:172] (0xc0027b7a40) (3) Data frame handling I0603 13:58:35.587853 6 log.go:172] (0xc0027b7a40) (3) Data frame sent I0603 13:58:35.588172 6 log.go:172] (0xc000de6f20) Data frame received for 5 I0603 13:58:35.588202 6 log.go:172] (0xc002961c20) (5) Data frame handling I0603 13:58:35.588261 6 log.go:172] (0xc000de6f20) Data frame received for 3 I0603 13:58:35.588333 6 log.go:172] (0xc0027b7a40) (3) Data frame handling I0603 13:58:35.590409 6 log.go:172] (0xc000de6f20) Data frame received for 1 I0603 13:58:35.590439 6 log.go:172] (0xc0027b79a0) (1) Data frame handling I0603 13:58:35.590457 6 log.go:172] (0xc0027b79a0) (1) Data frame sent I0603 13:58:35.590472 6 log.go:172] (0xc000de6f20) (0xc0027b79a0) Stream removed, broadcasting: 1 I0603 13:58:35.590556 6 log.go:172] (0xc000de6f20) (0xc0027b79a0) Stream removed, broadcasting: 1 I0603 13:58:35.590575 6 log.go:172] (0xc000de6f20) Go away received I0603 13:58:35.590606 6 log.go:172] (0xc000de6f20) (0xc0027b7a40) Stream removed, broadcasting: 3 I0603 
13:58:35.590626 6 log.go:172] (0xc000de6f20) (0xc002961c20) Stream removed, broadcasting: 5 Jun 3 13:58:35.590: INFO: Waiting for endpoints: map[] Jun 3 13:58:35.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.234:8080/dial?request=hostName&protocol=udp&host=10.244.1.31&port=8081&tries=1'] Namespace:pod-network-test-3854 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 13:58:35.594: INFO: >>> kubeConfig: /root/.kube/config I0603 13:58:35.628034 6 log.go:172] (0xc002902dc0) (0xc002062820) Create stream I0603 13:58:35.628132 6 log.go:172] (0xc002902dc0) (0xc002062820) Stream added, broadcasting: 1 I0603 13:58:35.630935 6 log.go:172] (0xc002902dc0) Reply frame received for 1 I0603 13:58:35.630991 6 log.go:172] (0xc002902dc0) (0xc0029ee0a0) Create stream I0603 13:58:35.631008 6 log.go:172] (0xc002902dc0) (0xc0029ee0a0) Stream added, broadcasting: 3 I0603 13:58:35.632337 6 log.go:172] (0xc002902dc0) Reply frame received for 3 I0603 13:58:35.632368 6 log.go:172] (0xc002902dc0) (0xc002961d60) Create stream I0603 13:58:35.632379 6 log.go:172] (0xc002902dc0) (0xc002961d60) Stream added, broadcasting: 5 I0603 13:58:35.633372 6 log.go:172] (0xc002902dc0) Reply frame received for 5 I0603 13:58:35.698550 6 log.go:172] (0xc002902dc0) Data frame received for 3 I0603 13:58:35.698581 6 log.go:172] (0xc0029ee0a0) (3) Data frame handling I0603 13:58:35.698604 6 log.go:172] (0xc0029ee0a0) (3) Data frame sent I0603 13:58:35.699682 6 log.go:172] (0xc002902dc0) Data frame received for 3 I0603 13:58:35.699719 6 log.go:172] (0xc0029ee0a0) (3) Data frame handling I0603 13:58:35.699766 6 log.go:172] (0xc002902dc0) Data frame received for 5 I0603 13:58:35.699789 6 log.go:172] (0xc002961d60) (5) Data frame handling I0603 13:58:35.702076 6 log.go:172] (0xc002902dc0) Data frame received for 1 I0603 13:58:35.702102 6 log.go:172] (0xc002062820) (1) Data frame handling I0603 
13:58:35.702110 6 log.go:172] (0xc002062820) (1) Data frame sent I0603 13:58:35.702119 6 log.go:172] (0xc002902dc0) (0xc002062820) Stream removed, broadcasting: 1 I0603 13:58:35.702172 6 log.go:172] (0xc002902dc0) Go away received I0603 13:58:35.702202 6 log.go:172] (0xc002902dc0) (0xc002062820) Stream removed, broadcasting: 1 I0603 13:58:35.702218 6 log.go:172] (0xc002902dc0) (0xc0029ee0a0) Stream removed, broadcasting: 3 I0603 13:58:35.702227 6 log.go:172] (0xc002902dc0) (0xc002961d60) Stream removed, broadcasting: 5 Jun 3 13:58:35.702: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:58:35.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3854" for this suite. Jun 3 13:58:59.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:58:59.807: INFO: namespace pod-network-test-3854 deletion completed in 24.101005065s • [SLOW TEST:52.553 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jun 3 13:58:59.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 3 13:59:07.921: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:07.949: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:09.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:09.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:11.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:11.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:13.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:13.953: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:15.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:15.953: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:17.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:17.953: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:19.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:19.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:21.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:21.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:23.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear 
Jun 3 13:59:23.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:25.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:25.953: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:27.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:27.954: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:29.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:29.953: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 13:59:31.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 13:59:31.953: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:59:31.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7569" for this suite. 
Jun 3 13:59:53.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 13:59:54.054: INFO: namespace container-lifecycle-hook-7569 deletion completed in 22.092286857s • [SLOW TEST:54.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 13:59:54.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 3 13:59:54.180: INFO: Waiting up to 5m0s for pod "downward-api-8349c87f-1574-4286-a991-f0e604741b0d" in namespace "downward-api-3702" to be "success or failure" Jun 3 13:59:54.196: INFO: Pod "downward-api-8349c87f-1574-4286-a991-f0e604741b0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.40884ms Jun 3 13:59:56.200: INFO: Pod "downward-api-8349c87f-1574-4286-a991-f0e604741b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019942574s Jun 3 13:59:58.204: INFO: Pod "downward-api-8349c87f-1574-4286-a991-f0e604741b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023900779s STEP: Saw pod success Jun 3 13:59:58.204: INFO: Pod "downward-api-8349c87f-1574-4286-a991-f0e604741b0d" satisfied condition "success or failure" Jun 3 13:59:58.207: INFO: Trying to get logs from node iruya-worker pod downward-api-8349c87f-1574-4286-a991-f0e604741b0d container dapi-container: STEP: delete the pod Jun 3 13:59:58.223: INFO: Waiting for pod downward-api-8349c87f-1574-4286-a991-f0e604741b0d to disappear Jun 3 13:59:58.227: INFO: Pod downward-api-8349c87f-1574-4286-a991-f0e604741b0d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 13:59:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3702" for this suite. 
Jun 3 14:00:04.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:00:04.381: INFO: namespace downward-api-3702 deletion completed in 6.151092386s • [SLOW TEST:10.326 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:00:04.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 14:00:04.663: INFO: Waiting up to 5m0s for pod "pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd" in namespace "emptydir-4850" to be "success or failure" Jun 3 14:00:04.671: INFO: Pod "pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.003855ms Jun 3 14:00:06.696: INFO: Pod "pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033104621s Jun 3 14:00:08.702: INFO: Pod "pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038208192s STEP: Saw pod success Jun 3 14:00:08.702: INFO: Pod "pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd" satisfied condition "success or failure" Jun 3 14:00:08.705: INFO: Trying to get logs from node iruya-worker2 pod pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd container test-container: STEP: delete the pod Jun 3 14:00:08.818: INFO: Waiting for pod pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd to disappear Jun 3 14:00:08.888: INFO: Pod pod-c54a11a4-8849-44cd-b97a-56faaae3e7cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:00:08.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4850" for this suite. Jun 3 14:00:14.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:00:14.986: INFO: namespace emptydir-4850 deletion completed in 6.094584367s • [SLOW TEST:10.604 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:00:14.986: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 3 14:00:19.642: INFO: Successfully updated pod "labelsupdate769e37ac-492a-4fd1-854f-80c8a36a021b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:00:23.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4518" for this suite. Jun 3 14:00:45.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:00:45.778: INFO: namespace downward-api-4518 deletion completed in 22.084724261s • [SLOW TEST:30.792 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:00:45.780: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:00:52.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2903" for this suite. Jun 3 14:00:58.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:00:58.208: INFO: namespace namespaces-2903 deletion completed in 6.151745909s STEP: Destroying namespace "nsdeletetest-3711" for this suite. Jun 3 14:00:58.210: INFO: Namespace nsdeletetest-3711 was already deleted STEP: Destroying namespace "nsdeletetest-5871" for this suite. 
Jun 3 14:01:04.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:01:04.306: INFO: namespace nsdeletetest-5871 deletion completed in 6.09635589s • [SLOW TEST:18.527 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:01:04.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 14:01:04.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4871' Jun 3 
14:01:04.473: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 3 14:01:04.473: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 3 14:01:04.479: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 3 14:01:04.488: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 3 14:01:04.531: INFO: scanned /root for discovery docs: Jun 3 14:01:04.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4871' Jun 3 14:01:20.368: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 3 14:01:20.368: INFO: stdout: "Created e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb\nScaling up e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 3 14:01:20.368: INFO: stdout: "Created e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb\nScaling up e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 3 14:01:20.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4871' Jun 3 14:01:20.483: INFO: stderr: "" Jun 3 14:01:20.483: INFO: stdout: "e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb-vsdp2 e2e-test-nginx-rc-lk6hp " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jun 3 14:01:25.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4871' Jun 3 14:01:25.582: INFO: stderr: "" Jun 3 14:01:25.582: INFO: stdout: "e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb-vsdp2 " Jun 3 14:01:25.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb-vsdp2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4871' Jun 3 14:01:25.668: INFO: stderr: "" Jun 3 14:01:25.668: INFO: stdout: "true" Jun 3 14:01:25.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb-vsdp2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4871' Jun 3 14:01:25.772: INFO: stderr: "" Jun 3 14:01:25.772: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 3 14:01:25.772: INFO: e2e-test-nginx-rc-c4955d2174c71963adbae1f1d40508bb-vsdp2 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jun 3 14:01:25.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4871' Jun 3 14:01:25.874: INFO: stderr: "" Jun 3 14:01:25.874: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:01:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4871" for this suite. 
Jun 3 14:01:31.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:01:31.992: INFO: namespace kubectl-4871 deletion completed in 6.100113429s • [SLOW TEST:27.684 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:01:31.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2035.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2035.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2035.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2035.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 14:01:38.116: INFO: DNS probes using dns-2035/dns-test-3beef844-4670-42ac-ae71-84dc455b988a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:01:38.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2035" for this suite. 
Jun 3 14:01:44.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:01:44.310: INFO: namespace dns-2035 deletion completed in 6.127193136s • [SLOW TEST:12.318 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:01:44.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-cvjk STEP: Creating a pod to test atomic-volume-subpath Jun 3 14:01:44.427: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cvjk" in namespace "subpath-9329" to be "success or failure" Jun 3 14:01:44.455: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 27.889891ms Jun 3 14:01:46.459: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031490087s Jun 3 14:01:48.462: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 4.035445485s Jun 3 14:01:50.467: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 6.039666489s Jun 3 14:01:52.471: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 8.044446644s Jun 3 14:01:54.475: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 10.048417213s Jun 3 14:01:56.480: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 12.053465092s Jun 3 14:01:58.485: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 14.058166356s Jun 3 14:02:00.490: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 16.062750209s Jun 3 14:02:02.494: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 18.067222762s Jun 3 14:02:04.498: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 20.071461201s Jun 3 14:02:06.503: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Running", Reason="", readiness=true. Elapsed: 22.07582215s Jun 3 14:02:08.507: INFO: Pod "pod-subpath-test-secret-cvjk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.079984528s STEP: Saw pod success Jun 3 14:02:08.507: INFO: Pod "pod-subpath-test-secret-cvjk" satisfied condition "success or failure" Jun 3 14:02:08.510: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-cvjk container test-container-subpath-secret-cvjk: STEP: delete the pod Jun 3 14:02:08.586: INFO: Waiting for pod pod-subpath-test-secret-cvjk to disappear Jun 3 14:02:08.593: INFO: Pod pod-subpath-test-secret-cvjk no longer exists STEP: Deleting pod pod-subpath-test-secret-cvjk Jun 3 14:02:08.593: INFO: Deleting pod "pod-subpath-test-secret-cvjk" in namespace "subpath-9329" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:02:08.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9329" for this suite. Jun 3 14:02:14.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:02:14.711: INFO: namespace subpath-9329 deletion completed in 6.111853668s • [SLOW TEST:30.400 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:02:14.711: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:02:14.790: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 3 14:02:19.795: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 14:02:19.795: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 3 14:02:21.799: INFO: Creating deployment "test-rollover-deployment" Jun 3 14:02:21.814: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 3 14:02:23.824: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 3 14:02:23.830: INFO: Ensure that both replica sets have 1 created replica Jun 3 14:02:23.835: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 3 14:02:23.842: INFO: Updating deployment test-rollover-deployment Jun 3 14:02:23.842: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 3 14:02:25.866: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 3 14:02:25.872: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 3 14:02:25.877: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:25.877: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789744, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:27.883: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:27.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789747, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:29.886: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:29.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789747, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:31.887: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:31.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789747, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:33.885: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:33.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789747, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:35.886: INFO: all replica sets need to contain the pod-template-hash label Jun 3 14:02:35.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789747, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:37.925: INFO: Jun 3 14:02:37.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789757, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726789741, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:02:39.885: INFO: Jun 3 14:02:39.885: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 3 14:02:39.895: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6575,SelfLink:/apis/apps/v1/namespaces/deployment-6575/deployments/test-rollover-deployment,UID:97e3116c-ac64-4efd-b0d1-2f2621687ac0,ResourceVersion:14451025,Generation:2,CreationTimestamp:2020-06-03 14:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-03 14:02:21 +0000 UTC 2020-06-03 
14:02:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-03 14:02:37 +0000 UTC 2020-06-03 14:02:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 3 14:02:39.898: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6575,SelfLink:/apis/apps/v1/namespaces/deployment-6575/replicasets/test-rollover-deployment-854595fc44,UID:cfa9d538-fd69-4d0d-b882-03fdab17f135,ResourceVersion:14451014,Generation:2,CreationTimestamp:2020-06-03 14:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 97e3116c-ac64-4efd-b0d1-2f2621687ac0 0xc0027492b7 0xc0027492b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 3 14:02:39.898: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 3 14:02:39.898: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6575,SelfLink:/apis/apps/v1/namespaces/deployment-6575/replicasets/test-rollover-controller,UID:faa81a9b-38f6-4137-b5c1-c1f4d16a5fda,ResourceVersion:14451023,Generation:2,CreationTimestamp:2020-06-03 14:02:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
97e3116c-ac64-4efd-b0d1-2f2621687ac0 0xc002749167 0xc002749168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 14:02:39.899: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6575,SelfLink:/apis/apps/v1/namespaces/deployment-6575/replicasets/test-rollover-deployment-9b8b997cf,UID:789e3ef8-1b20-4204-ad1a-3ab7ee86c286,ResourceVersion:14450975,Generation:2,CreationTimestamp:2020-06-03 14:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 97e3116c-ac64-4efd-b0d1-2f2621687ac0 0xc002749400 0xc002749401}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 14:02:39.902: INFO: Pod "test-rollover-deployment-854595fc44-gp7sg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gp7sg,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6575,SelfLink:/api/v1/namespaces/deployment-6575/pods/test-rollover-deployment-854595fc44-gp7sg,UID:68926274-84f8-482c-8e43-ee30d17bd566,ResourceVersion:14450991,Generation:0,CreationTimestamp:2020-06-03 14:02:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 cfa9d538-fd69-4d0d-b882-03fdab17f135 0xc003330737 0xc003330738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dfvll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dfvll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dfvll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033307b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033307d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:02:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:02:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:02:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:02:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.37,StartTime:2020-06-03 14:02:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-03 14:02:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3de451ccc875cfcb84f5a5c6971cc2e876036914fb5b96fd5724ce87bc119b62}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:02:39.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6575" for this suite. Jun 3 14:02:47.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:02:48.023: INFO: namespace deployment-6575 deletion completed in 8.117235496s • [SLOW TEST:33.312 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:02:48.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 3 14:02:56.180: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:02:56.199: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:02:58.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:02:58.204: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:00.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:00.203: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:02.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:02.204: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:04.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:04.204: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:06.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:06.203: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:08.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:08.204: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:10.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:10.204: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 14:03:12.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 14:03:12.204: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:03:12.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9242" for this suite. 
Jun 3 14:03:34.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:03:34.322: INFO: namespace container-lifecycle-hook-9242 deletion completed in 22.106307828s • [SLOW TEST:46.298 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:03:34.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-43d55935-4847-4c53-ba72-03b3fa513e31 STEP: Creating a pod to test consume secrets Jun 3 14:03:34.442: INFO: Waiting up to 5m0s for pod "pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5" in namespace "secrets-9994" to be "success or failure" Jun 3 14:03:34.445: INFO: Pod "pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.681765ms Jun 3 14:03:36.450: INFO: Pod "pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008437204s Jun 3 14:03:38.460: INFO: Pod "pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01823649s STEP: Saw pod success Jun 3 14:03:38.460: INFO: Pod "pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5" satisfied condition "success or failure" Jun 3 14:03:38.462: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5 container secret-volume-test: STEP: delete the pod Jun 3 14:03:38.488: INFO: Waiting for pod pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5 to disappear Jun 3 14:03:38.505: INFO: Pod pod-secrets-30eb7fcb-1677-407a-ba7a-822475ce1cf5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:03:38.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9994" for this suite. 
Jun 3 14:03:44.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:03:44.622: INFO: namespace secrets-9994 deletion completed in 6.113494565s • [SLOW TEST:10.300 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:03:44.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:03:44.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2555" for this suite. 
Jun 3 14:03:50.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:03:50.795: INFO: namespace services-2555 deletion completed in 6.089450298s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.172 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:03:50.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:03:50.893: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 3 14:03:50.901: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:50.906: INFO: Number of nodes with available pods: 0 Jun 3 14:03:50.906: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:03:51.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:51.915: INFO: Number of nodes with available pods: 0 Jun 3 14:03:51.915: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:03:52.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:52.915: INFO: Number of nodes with available pods: 0 Jun 3 14:03:52.915: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:03:53.963: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:53.968: INFO: Number of nodes with available pods: 0 Jun 3 14:03:53.968: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:03:54.928: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:54.981: INFO: Number of nodes with available pods: 1 Jun 3 14:03:54.981: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 14:03:55.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:55.915: INFO: Number of nodes with available pods: 1 Jun 3 14:03:55.915: INFO: Node iruya-worker2 is 
running more than one daemon pod Jun 3 14:03:56.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:56.915: INFO: Number of nodes with available pods: 2 Jun 3 14:03:56.915: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 3 14:03:56.999: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:03:56.999: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:03:57.016: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:58.020: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:03:58.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:03:58.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:03:59.026: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:03:59.026: INFO: Pod daemon-set-ghf68 is not available Jun 3 14:03:59.026: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 3 14:03:59.030: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:00.020: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:00.020: INFO: Pod daemon-set-ghf68 is not available Jun 3 14:04:00.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:00.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:01.020: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:01.020: INFO: Pod daemon-set-ghf68 is not available Jun 3 14:04:01.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:01.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:02.021: INFO: Wrong image for pod: daemon-set-ghf68. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:02.021: INFO: Pod daemon-set-ghf68 is not available Jun 3 14:04:02.021: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 3 14:04:02.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:03.021: INFO: Pod daemon-set-d5c76 is not available Jun 3 14:04:03.021: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:03.026: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:04.083: INFO: Pod daemon-set-d5c76 is not available Jun 3 14:04:04.083: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:04.088: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:05.020: INFO: Pod daemon-set-d5c76 is not available Jun 3 14:04:05.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:05.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:06.035: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:06.039: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:07.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 3 14:04:07.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:08.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:08.020: INFO: Pod daemon-set-sh2kq is not available Jun 3 14:04:08.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:09.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:09.020: INFO: Pod daemon-set-sh2kq is not available Jun 3 14:04:09.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:10.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 3 14:04:10.020: INFO: Pod daemon-set-sh2kq is not available Jun 3 14:04:10.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:11.020: INFO: Wrong image for pod: daemon-set-sh2kq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 3 14:04:11.020: INFO: Pod daemon-set-sh2kq is not available Jun 3 14:04:11.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:12.020: INFO: Pod daemon-set-6crwg is not available Jun 3 14:04:12.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 3 14:04:12.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:12.030: INFO: Number of nodes with available pods: 1 Jun 3 14:04:12.030: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 14:04:13.035: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:13.039: INFO: Number of nodes with available pods: 1 Jun 3 14:04:13.039: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 14:04:14.060: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:14.063: INFO: Number of nodes with available pods: 1 Jun 3 14:04:14.063: INFO: Node iruya-worker2 is running more than one daemon pod Jun 3 14:04:15.035: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 14:04:15.059: INFO: Number of nodes with available pods: 2 Jun 3 14:04:15.059: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7892, will wait for the garbage collector to delete the pods Jun 3 14:04:15.133: INFO: Deleting DaemonSet.extensions daemon-set took: 6.655228ms Jun 3 14:04:15.433: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.326875ms Jun 3 14:04:22.238: INFO: Number of nodes with available pods: 0 Jun 3 14:04:22.238: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 14:04:22.241: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7892/daemonsets","resourceVersion":"14451419"},"items":null} Jun 3 14:04:22.243: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7892/pods","resourceVersion":"14451419"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:04:22.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7892" for this suite. 
Jun 3 14:04:28.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:04:28.399: INFO: namespace daemonsets-7892 deletion completed in 6.143394837s • [SLOW TEST:37.604 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:04:28.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-c09a9833-010f-4943-93ae-57b353a5c4fb [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:04:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9321" for this suite. 
Jun 3 14:04:34.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:04:34.541: INFO: namespace configmap-9321 deletion completed in 6.088515232s • [SLOW TEST:6.142 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:04:34.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:04:34.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834" in namespace "downward-api-9154" to be "success or failure" Jun 3 14:04:34.677: INFO: Pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834": Phase="Pending", Reason="", readiness=false. 
Elapsed: 60.421497ms Jun 3 14:04:36.856: INFO: Pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239801294s Jun 3 14:04:38.861: INFO: Pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834": Phase="Running", Reason="", readiness=true. Elapsed: 4.244482934s Jun 3 14:04:40.865: INFO: Pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249043915s STEP: Saw pod success Jun 3 14:04:40.866: INFO: Pod "downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834" satisfied condition "success or failure" Jun 3 14:04:40.868: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834 container client-container: STEP: delete the pod Jun 3 14:04:40.907: INFO: Waiting for pod downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834 to disappear Jun 3 14:04:40.920: INFO: Pod downwardapi-volume-e3d5fcde-7a07-41ac-bba2-5b9b36173834 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:04:40.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9154" for this suite. 
Jun 3 14:04:46.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:04:47.010: INFO: namespace downward-api-9154 deletion completed in 6.086610126s • [SLOW TEST:12.469 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:04:47.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jun 3 14:04:47.104: INFO: Waiting up to 5m0s for pod "client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df" in namespace "containers-3186" to be "success or failure" Jun 3 14:04:47.111: INFO: Pod "client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df": Phase="Pending", Reason="", readiness=false. Elapsed: 7.483457ms Jun 3 14:04:49.115: INFO: Pod "client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011165888s Jun 3 14:04:51.119: INFO: Pod "client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014805395s STEP: Saw pod success Jun 3 14:04:51.119: INFO: Pod "client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df" satisfied condition "success or failure" Jun 3 14:04:51.121: INFO: Trying to get logs from node iruya-worker2 pod client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df container test-container: STEP: delete the pod Jun 3 14:04:51.136: INFO: Waiting for pod client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df to disappear Jun 3 14:04:51.147: INFO: Pod client-containers-d381f9d4-f602-4d96-8def-5fe22ce220df no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:04:51.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3186" for this suite. Jun 3 14:04:57.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:04:57.251: INFO: namespace containers-3186 deletion completed in 6.101084315s • [SLOW TEST:10.240 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Jun 3 14:04:57.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 3 14:04:57.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6411' Jun 3 14:04:59.955: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 3 14:04:59.955: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jun 3 14:04:59.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6411' Jun 3 14:05:00.092: INFO: stderr: "" Jun 3 14:05:00.092: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:05:00.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6411" for this suite. 
Jun 3 14:05:06.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:05:06.243: INFO: namespace kubectl-6411 deletion completed in 6.13501845s • [SLOW TEST:8.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:05:06.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h5jkv in namespace proxy-6222 I0603 14:05:06.382250 6 runners.go:180] Created replication controller with name: proxy-service-h5jkv, namespace: proxy-6222, replica count: 1 I0603 14:05:07.432746 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 14:05:08.432980 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0603 14:05:09.433567 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:10.433892 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:11.434166 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:12.434451 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:13.434646 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:14.434919 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:15.435133 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:16.435350 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:17.435537 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 14:05:18.435702 6 runners.go:180] proxy-service-h5jkv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 14:05:18.452: INFO: setup took 12.15757142s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 3 14:05:18.481: INFO: (0) 
/api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 27.508947ms) Jun 3 14:05:18.481: INFO: (0) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 27.613982ms) Jun 3 14:05:18.482: INFO: (0) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 29.133847ms) Jun 3 14:05:18.482: INFO: (0) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 29.036984ms) Jun 3 14:05:18.482: INFO: (0) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 29.128935ms) Jun 3 14:05:18.484: INFO: (0) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 30.514474ms) Jun 3 14:05:18.484: INFO: (0) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 30.24141ms) Jun 3 14:05:18.484: INFO: (0) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 30.234092ms) Jun 3 14:05:18.485: INFO: (0) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 32.251778ms) Jun 3 14:05:18.486: INFO: (0) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 32.503021ms) Jun 3 14:05:18.486: INFO: (0) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 32.610838ms) Jun 3 14:05:18.491: INFO: (0) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 38.71111ms) Jun 3 14:05:18.491: INFO: (0) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 38.040133ms) Jun 3 14:05:18.492: INFO: (0) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 38.189549ms) Jun 3 14:05:18.492: INFO: (0) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 38.491291ms) Jun 3 14:05:18.492: INFO: (0) 
/api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... (200; 6.388106ms) Jun 3 14:05:18.499: INFO: (1) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 6.532461ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 7.141704ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 7.091648ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 7.114956ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 7.122991ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 7.13368ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 7.250875ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 7.200982ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 7.283861ms) Jun 3 14:05:18.500: INFO: (1) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 5.372056ms) Jun 3 14:05:18.506: INFO: (2) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 5.609835ms) Jun 3 14:05:18.506: INFO: (2) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... 
(200; 6.879824ms) Jun 3 14:05:18.507: INFO: (2) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 6.852924ms) Jun 3 14:05:18.507: INFO: (2) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 6.848366ms) Jun 3 14:05:18.507: INFO: (2) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 6.931771ms) Jun 3 14:05:18.509: INFO: (2) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 9.497839ms) Jun 3 14:05:18.514: INFO: (3) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.333895ms) Jun 3 14:05:18.514: INFO: (3) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.415122ms) Jun 3 14:05:18.514: INFO: (3) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 4.344765ms) Jun 3 14:05:18.514: INFO: (3) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.31071ms) Jun 3 14:05:18.514: INFO: (3) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test<... (200; 5.130609ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 5.086906ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 5.08168ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 5.145968ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 5.260227ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 5.307775ms) Jun 3 14:05:18.515: INFO: (3) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 5.381933ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 3.420738ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 3.570686ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 3.628946ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 3.707879ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.032931ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 3.890864ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 3.925952ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 3.979268ms) Jun 3 14:05:18.519: INFO: (4) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.881843ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 4.542569ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 4.62605ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 4.649784ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 4.812541ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 4.89562ms) Jun 3 14:05:18.520: INFO: (4) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 5.024469ms) Jun 3 14:05:18.522: INFO: (5) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 2.079161ms) Jun 3 14:05:18.525: INFO: (5) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.826473ms) Jun 3 14:05:18.525: INFO: (5) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.98728ms) Jun 3 14:05:18.525: INFO: (5) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 5.023165ms) Jun 3 14:05:18.525: INFO: (5) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 5.121774ms) Jun 3 14:05:18.525: INFO: (5) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 5.092137ms) Jun 3 14:05:18.526: INFO: (5) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 5.406669ms) Jun 3 14:05:18.526: INFO: (5) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 5.366954ms) Jun 3 14:05:18.526: INFO: (5) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 5.473183ms) Jun 3 14:05:18.526: INFO: (5) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 5.416513ms) Jun 3 14:05:18.526: INFO: (5) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 5.864027ms) Jun 3 14:05:18.528: INFO: (5) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 7.500396ms) Jun 3 14:05:18.528: INFO: (5) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 7.609197ms) Jun 3 14:05:18.528: INFO: (5) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 7.57877ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 8.927398ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 9.068464ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 9.13601ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 9.215368ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 9.227703ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 9.189727ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 9.375912ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 9.300704ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 9.257498ms) Jun 3 14:05:18.537: INFO: (6) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... (200; 3.689358ms) Jun 3 14:05:18.542: INFO: (7) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test<... (200; 4.122641ms) Jun 3 14:05:18.542: INFO: (7) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.08456ms) Jun 3 14:05:18.542: INFO: (7) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.225982ms) Jun 3 14:05:18.543: INFO: (7) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.30714ms) Jun 3 14:05:18.543: INFO: (7) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 4.360799ms) Jun 3 14:05:18.543: INFO: (7) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.327719ms) Jun 3 14:05:18.544: INFO: (7) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 6.127332ms) Jun 3 14:05:18.544: INFO: (7) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 6.199879ms) Jun 3 14:05:18.544: INFO: (7) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 6.170542ms) Jun 3 14:05:18.544: INFO: (7) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 6.184314ms) Jun 3 14:05:18.544: INFO: (7) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 6.252341ms) Jun 3 14:05:18.545: INFO: (7) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 6.305063ms) Jun 3 
14:05:18.548: INFO: (8) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 3.132827ms) Jun 3 14:05:18.548: INFO: (8) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.23058ms) Jun 3 14:05:18.548: INFO: (8) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.359565ms) Jun 3 14:05:18.548: INFO: (8) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 3.463255ms) Jun 3 14:05:18.548: INFO: (8) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 3.430599ms) Jun 3 14:05:18.549: INFO: (8) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 5.571904ms) Jun 3 14:05:18.550: INFO: (8) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 5.550992ms) Jun 3 14:05:18.554: INFO: (8) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 9.375626ms) Jun 3 14:05:18.556: INFO: (8) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 11.378346ms) Jun 3 14:05:18.556: INFO: (8) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 11.43809ms) Jun 3 14:05:18.556: INFO: (8) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 11.507389ms) Jun 3 14:05:18.556: INFO: (8) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 11.4016ms) Jun 3 14:05:18.557: INFO: (8) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 12.385324ms) Jun 3 14:05:18.566: INFO: (9) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 8.466341ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 10.220177ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 10.279332ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 10.283252ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 10.301908ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 10.45395ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 10.713352ms) Jun 3 14:05:18.568: INFO: (9) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 2.66021ms) Jun 3 14:05:18.573: INFO: (10) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 2.925657ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 2.364835ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 3.138261ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 2.998991ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... 
(200; 3.954429ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 3.477257ms) Jun 3 14:05:18.574: INFO: (10) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 3.387493ms) Jun 3 14:05:18.575: INFO: (10) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 3.055128ms) Jun 3 14:05:18.575: INFO: (10) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 3.165214ms) Jun 3 14:05:18.575: INFO: (10) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 3.599421ms) Jun 3 14:05:18.577: INFO: (11) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 3.768812ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.880247ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 3.99006ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.003783ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.213381ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 4.259675ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 4.270813ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.292674ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.396578ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 4.355007ms) Jun 3 14:05:18.579: INFO: (11) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 4.411641ms) Jun 3 14:05:18.618: INFO: (11) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 42.954386ms) Jun 3 14:05:18.618: INFO: (11) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 42.895049ms) Jun 3 14:05:18.618: INFO: (11) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 43.387927ms) Jun 3 14:05:18.619: INFO: (11) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 43.91834ms) Jun 3 14:05:18.623: INFO: (12) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.161743ms) Jun 3 14:05:18.626: INFO: (12) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 6.71933ms) Jun 3 14:05:18.626: INFO: (12) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 7.228729ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 8.876526ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 8.791514ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 9.269606ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 9.208746ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 9.168048ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 9.190061ms) Jun 3 14:05:18.628: INFO: (12) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 9.443765ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 10.145792ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 10.364854ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 10.164645ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 10.171299ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 10.258339ms) Jun 3 14:05:18.629: INFO: (12) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 24.9413ms) Jun 3 14:05:18.654: INFO: (13) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 25.05509ms) Jun 3 14:05:18.655: INFO: (13) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 25.287163ms) Jun 3 14:05:18.655: INFO: (13) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 25.792565ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 26.59034ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 26.550435ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... (200; 26.627883ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 26.810555ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 26.962528ms) Jun 3 14:05:18.656: INFO: (13) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 27.019884ms) Jun 3 14:05:18.657: INFO: (13) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 27.833924ms) Jun 3 14:05:18.657: INFO: (13) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 27.832475ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 8.137813ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 8.193802ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 8.265588ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 8.29684ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 8.359273ms) Jun 3 14:05:18.666: INFO: (14) 
/api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test<... (200; 8.333665ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 8.411439ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 8.456869ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 8.568611ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 8.405739ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 8.442179ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 8.526072ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 8.380612ms) Jun 3 14:05:18.666: INFO: (14) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 8.473302ms) Jun 3 14:05:18.669: INFO: (15) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 3.258808ms) Jun 3 14:05:18.669: INFO: (15) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 3.399529ms) Jun 3 14:05:18.670: INFO: (15) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 4.0093ms) Jun 3 14:05:18.670: INFO: (15) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 4.398667ms) Jun 3 14:05:18.670: INFO: (15) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.399211ms) Jun 3 14:05:18.671: INFO: (15) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 4.792563ms) Jun 3 14:05:18.671: INFO: (15) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.843425ms) Jun 3 14:05:18.671: INFO: (15) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 5.127777ms) Jun 3 14:05:18.671: INFO: (15) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 5.278533ms) Jun 3 14:05:18.671: INFO: (15) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 5.215738ms) Jun 3 14:05:18.675: INFO: (16) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.904708ms) Jun 3 14:05:18.675: INFO: (16) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test<... (200; 4.857501ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.864563ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 4.890118ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 4.985526ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 4.906545ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 4.936013ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 5.004036ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 4.992801ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 4.952629ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 5.098698ms) Jun 3 14:05:18.676: INFO: (16) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 5.063134ms) Jun 3 14:05:18.677: INFO: (16) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 5.521044ms) Jun 3 14:05:18.680: INFO: (17) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.294423ms) Jun 3 14:05:18.680: INFO: (17) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 3.512208ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... (200; 3.549438ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 3.547964ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 3.596528ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 3.664584ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 3.997792ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... 
(200; 4.00293ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.191613ms) Jun 3 14:05:18.681: INFO: (17) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test<... (200; 5.960724ms) Jun 3 14:05:18.689: INFO: (18) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: test (200; 7.375925ms) Jun 3 14:05:18.690: INFO: (18) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:1080/proxy/: ... (200; 7.385697ms) Jun 3 14:05:18.691: INFO: (18) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 7.851663ms) Jun 3 14:05:18.691: INFO: (18) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 8.172238ms) Jun 3 14:05:18.691: INFO: (18) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 8.544337ms) Jun 3 14:05:18.692: INFO: (18) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 8.521765ms) Jun 3 14:05:18.692: INFO: (18) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 8.539051ms) Jun 3 14:05:18.692: INFO: (18) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 8.54025ms) Jun 3 14:05:18.695: INFO: (19) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:443/proxy/: ... 
(200; 3.214911ms) Jun 3 14:05:18.695: INFO: (19) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname1/proxy/: tls baz (200; 3.395977ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname1/proxy/: foo (200; 4.054658ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.136641ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.078142ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:462/proxy/: tls qux (200; 4.109587ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:160/proxy/: foo (200; 4.135769ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx/proxy/: test (200; 4.127313ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/http:proxy-service-h5jkv-2j9sx:162/proxy/: bar (200; 4.201594ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/proxy-service-h5jkv-2j9sx:1080/proxy/: test<... 
(200; 4.133504ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/services/proxy-service-h5jkv:portname2/proxy/: bar (200; 4.277956ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname2/proxy/: bar (200; 4.304386ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/services/http:proxy-service-h5jkv:portname1/proxy/: foo (200; 4.337628ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/services/https:proxy-service-h5jkv:tlsportname2/proxy/: tls qux (200; 4.48408ms) Jun 3 14:05:18.696: INFO: (19) /api/v1/namespaces/proxy-6222/pods/https:proxy-service-h5jkv-2j9sx:460/proxy/: tls baz (200; 4.594211ms) STEP: deleting ReplicationController proxy-service-h5jkv in namespace proxy-6222, will wait for the garbage collector to delete the pods Jun 3 14:05:18.762: INFO: Deleting ReplicationController proxy-service-h5jkv took: 13.727251ms Jun 3 14:05:19.062: INFO: Terminating ReplicationController proxy-service-h5jkv pods took: 300.265839ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:05:32.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6222" for this suite. 
Jun 3 14:05:38.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:05:38.360: INFO: namespace proxy-6222 deletion completed in 6.092755017s • [SLOW TEST:32.117 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:05:38.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4218 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4218 STEP: Waiting 
until all stateful set ss replicas will be running in namespace statefulset-4218 Jun 3 14:05:38.471: INFO: Found 0 stateful pods, waiting for 1 Jun 3 14:05:48.476: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 3 14:05:48.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:05:48.761: INFO: stderr: "I0603 14:05:48.605445 2774 log.go:172] (0xc00013adc0) (0xc0001ba820) Create stream\nI0603 14:05:48.605543 2774 log.go:172] (0xc00013adc0) (0xc0001ba820) Stream added, broadcasting: 1\nI0603 14:05:48.607731 2774 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0603 14:05:48.607771 2774 log.go:172] (0xc00013adc0) (0xc000a88000) Create stream\nI0603 14:05:48.608247 2774 log.go:172] (0xc00013adc0) (0xc000a88000) Stream added, broadcasting: 3\nI0603 14:05:48.609981 2774 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0603 14:05:48.610010 2774 log.go:172] (0xc00013adc0) (0xc000216280) Create stream\nI0603 14:05:48.610019 2774 log.go:172] (0xc00013adc0) (0xc000216280) Stream added, broadcasting: 5\nI0603 14:05:48.610828 2774 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0603 14:05:48.726046 2774 log.go:172] (0xc00013adc0) Data frame received for 5\nI0603 14:05:48.726078 2774 log.go:172] (0xc000216280) (5) Data frame handling\nI0603 14:05:48.726099 2774 log.go:172] (0xc000216280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:05:48.753068 2774 log.go:172] (0xc00013adc0) Data frame received for 3\nI0603 14:05:48.753105 2774 log.go:172] (0xc000a88000) (3) Data frame handling\nI0603 14:05:48.753314 2774 log.go:172] (0xc000a88000) (3) Data frame sent\nI0603 14:05:48.753352 2774 log.go:172] (0xc00013adc0) Data frame received for 3\nI0603 14:05:48.753376 2774 
log.go:172] (0xc000a88000) (3) Data frame handling\nI0603 14:05:48.753437 2774 log.go:172] (0xc00013adc0) Data frame received for 5\nI0603 14:05:48.753462 2774 log.go:172] (0xc000216280) (5) Data frame handling\nI0603 14:05:48.755200 2774 log.go:172] (0xc00013adc0) Data frame received for 1\nI0603 14:05:48.755230 2774 log.go:172] (0xc0001ba820) (1) Data frame handling\nI0603 14:05:48.755256 2774 log.go:172] (0xc0001ba820) (1) Data frame sent\nI0603 14:05:48.755281 2774 log.go:172] (0xc00013adc0) (0xc0001ba820) Stream removed, broadcasting: 1\nI0603 14:05:48.755307 2774 log.go:172] (0xc00013adc0) Go away received\nI0603 14:05:48.755662 2774 log.go:172] (0xc00013adc0) (0xc0001ba820) Stream removed, broadcasting: 1\nI0603 14:05:48.755687 2774 log.go:172] (0xc00013adc0) (0xc000a88000) Stream removed, broadcasting: 3\nI0603 14:05:48.755696 2774 log.go:172] (0xc00013adc0) (0xc000216280) Stream removed, broadcasting: 5\n" Jun 3 14:05:48.761: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:05:48.761: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 14:05:48.779: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 3 14:05:58.784: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 14:05:58.784: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 14:05:58.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999549s Jun 3 14:05:59.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989773593s Jun 3 14:06:00.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98485035s Jun 3 14:06:01.821: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979742792s Jun 3 14:06:02.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975879769s Jun 3 14:06:03.830: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 4.971620929s Jun 3 14:06:04.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.966946701s Jun 3 14:06:05.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962223846s Jun 3 14:06:06.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957399821s Jun 3 14:06:07.849: INFO: Verifying statefulset ss doesn't scale past 1 for another 953.738449ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4218 Jun 3 14:06:08.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:06:09.102: INFO: stderr: "I0603 14:06:08.995341 2795 log.go:172] (0xc000848370) (0xc0007d8640) Create stream\nI0603 14:06:08.995393 2795 log.go:172] (0xc000848370) (0xc0007d8640) Stream added, broadcasting: 1\nI0603 14:06:08.997574 2795 log.go:172] (0xc000848370) Reply frame received for 1\nI0603 14:06:08.997609 2795 log.go:172] (0xc000848370) (0xc0005f0640) Create stream\nI0603 14:06:08.997625 2795 log.go:172] (0xc000848370) (0xc0005f0640) Stream added, broadcasting: 3\nI0603 14:06:08.998545 2795 log.go:172] (0xc000848370) Reply frame received for 3\nI0603 14:06:08.998581 2795 log.go:172] (0xc000848370) (0xc0007d86e0) Create stream\nI0603 14:06:08.998609 2795 log.go:172] (0xc000848370) (0xc0007d86e0) Stream added, broadcasting: 5\nI0603 14:06:08.999956 2795 log.go:172] (0xc000848370) Reply frame received for 5\nI0603 14:06:09.094756 2795 log.go:172] (0xc000848370) Data frame received for 5\nI0603 14:06:09.094796 2795 log.go:172] (0xc0007d86e0) (5) Data frame handling\nI0603 14:06:09.094813 2795 log.go:172] (0xc0007d86e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:06:09.094831 2795 log.go:172] (0xc000848370) Data frame received for 3\nI0603 14:06:09.094839 2795 
log.go:172] (0xc0005f0640) (3) Data frame handling\nI0603 14:06:09.094848 2795 log.go:172] (0xc0005f0640) (3) Data frame sent\nI0603 14:06:09.094856 2795 log.go:172] (0xc000848370) Data frame received for 3\nI0603 14:06:09.094865 2795 log.go:172] (0xc0005f0640) (3) Data frame handling\nI0603 14:06:09.094881 2795 log.go:172] (0xc000848370) Data frame received for 5\nI0603 14:06:09.094891 2795 log.go:172] (0xc0007d86e0) (5) Data frame handling\nI0603 14:06:09.096204 2795 log.go:172] (0xc000848370) Data frame received for 1\nI0603 14:06:09.096235 2795 log.go:172] (0xc0007d8640) (1) Data frame handling\nI0603 14:06:09.096250 2795 log.go:172] (0xc0007d8640) (1) Data frame sent\nI0603 14:06:09.096263 2795 log.go:172] (0xc000848370) (0xc0007d8640) Stream removed, broadcasting: 1\nI0603 14:06:09.096364 2795 log.go:172] (0xc000848370) Go away received\nI0603 14:06:09.096638 2795 log.go:172] (0xc000848370) (0xc0007d8640) Stream removed, broadcasting: 1\nI0603 14:06:09.096656 2795 log.go:172] (0xc000848370) (0xc0005f0640) Stream removed, broadcasting: 3\nI0603 14:06:09.096665 2795 log.go:172] (0xc000848370) (0xc0007d86e0) Stream removed, broadcasting: 5\n" Jun 3 14:06:09.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:06:09.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:06:09.106: INFO: Found 1 stateful pods, waiting for 3 Jun 3 14:06:19.110: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:06:19.110: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:06:19.110: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 3 14:06:19.116: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:06:19.333: INFO: stderr: "I0603 14:06:19.249816 2816 log.go:172] (0xc000116f20) (0xc0005eeb40) Create stream\nI0603 14:06:19.249888 2816 log.go:172] (0xc000116f20) (0xc0005eeb40) Stream added, broadcasting: 1\nI0603 14:06:19.251913 2816 log.go:172] (0xc000116f20) Reply frame received for 1\nI0603 14:06:19.251953 2816 log.go:172] (0xc000116f20) (0xc0008c2000) Create stream\nI0603 14:06:19.251968 2816 log.go:172] (0xc000116f20) (0xc0008c2000) Stream added, broadcasting: 3\nI0603 14:06:19.252951 2816 log.go:172] (0xc000116f20) Reply frame received for 3\nI0603 14:06:19.252987 2816 log.go:172] (0xc000116f20) (0xc0001f4000) Create stream\nI0603 14:06:19.252998 2816 log.go:172] (0xc000116f20) (0xc0001f4000) Stream added, broadcasting: 5\nI0603 14:06:19.254205 2816 log.go:172] (0xc000116f20) Reply frame received for 5\nI0603 14:06:19.324250 2816 log.go:172] (0xc000116f20) Data frame received for 5\nI0603 14:06:19.324286 2816 log.go:172] (0xc0001f4000) (5) Data frame handling\nI0603 14:06:19.324298 2816 log.go:172] (0xc0001f4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:06:19.324367 2816 log.go:172] (0xc000116f20) Data frame received for 3\nI0603 14:06:19.325044 2816 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0603 14:06:19.325069 2816 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0603 14:06:19.325090 2816 log.go:172] (0xc000116f20) Data frame received for 3\nI0603 14:06:19.325336 2816 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0603 14:06:19.325408 2816 log.go:172] (0xc000116f20) Data frame received for 5\nI0603 14:06:19.325427 2816 log.go:172] (0xc0001f4000) (5) Data frame handling\nI0603 14:06:19.326266 2816 log.go:172] (0xc000116f20) Data frame received for 1\nI0603 14:06:19.326304 2816 log.go:172] (0xc0005eeb40) (1) Data frame handling\nI0603 14:06:19.326347 2816 
log.go:172] (0xc0005eeb40) (1) Data frame sent\nI0603 14:06:19.326513 2816 log.go:172] (0xc000116f20) (0xc0005eeb40) Stream removed, broadcasting: 1\nI0603 14:06:19.326548 2816 log.go:172] (0xc000116f20) Go away received\nI0603 14:06:19.327471 2816 log.go:172] (0xc000116f20) (0xc0005eeb40) Stream removed, broadcasting: 1\nI0603 14:06:19.327495 2816 log.go:172] (0xc000116f20) (0xc0008c2000) Stream removed, broadcasting: 3\nI0603 14:06:19.327508 2816 log.go:172] (0xc000116f20) (0xc0001f4000) Stream removed, broadcasting: 5\n" Jun 3 14:06:19.333: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:06:19.333: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 14:06:19.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:06:19.584: INFO: stderr: "I0603 14:06:19.454364 2836 log.go:172] (0xc0001168f0) (0xc0005d0960) Create stream\nI0603 14:06:19.454428 2836 log.go:172] (0xc0001168f0) (0xc0005d0960) Stream added, broadcasting: 1\nI0603 14:06:19.457421 2836 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0603 14:06:19.461648 2836 log.go:172] (0xc0001168f0) (0xc0002de140) Create stream\nI0603 14:06:19.461860 2836 log.go:172] (0xc0001168f0) (0xc0002de140) Stream added, broadcasting: 3\nI0603 14:06:19.464029 2836 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0603 14:06:19.464085 2836 log.go:172] (0xc0001168f0) (0xc0002de000) Create stream\nI0603 14:06:19.464099 2836 log.go:172] (0xc0001168f0) (0xc0002de000) Stream added, broadcasting: 5\nI0603 14:06:19.466286 2836 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0603 14:06:19.547518 2836 log.go:172] (0xc0001168f0) Data frame received for 5\nI0603 14:06:19.547554 2836 log.go:172] (0xc0002de000) (5) Data frame handling\nI0603 14:06:19.547576 2836 
log.go:172] (0xc0002de000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:06:19.577007 2836 log.go:172] (0xc0001168f0) Data frame received for 5\nI0603 14:06:19.577062 2836 log.go:172] (0xc0002de000) (5) Data frame handling\nI0603 14:06:19.577094 2836 log.go:172] (0xc0001168f0) Data frame received for 3\nI0603 14:06:19.577283 2836 log.go:172] (0xc0002de140) (3) Data frame handling\nI0603 14:06:19.577321 2836 log.go:172] (0xc0002de140) (3) Data frame sent\nI0603 14:06:19.577342 2836 log.go:172] (0xc0001168f0) Data frame received for 3\nI0603 14:06:19.577362 2836 log.go:172] (0xc0002de140) (3) Data frame handling\nI0603 14:06:19.578804 2836 log.go:172] (0xc0001168f0) Data frame received for 1\nI0603 14:06:19.578829 2836 log.go:172] (0xc0005d0960) (1) Data frame handling\nI0603 14:06:19.578842 2836 log.go:172] (0xc0005d0960) (1) Data frame sent\nI0603 14:06:19.578859 2836 log.go:172] (0xc0001168f0) (0xc0005d0960) Stream removed, broadcasting: 1\nI0603 14:06:19.578878 2836 log.go:172] (0xc0001168f0) Go away received\nI0603 14:06:19.579139 2836 log.go:172] (0xc0001168f0) (0xc0005d0960) Stream removed, broadcasting: 1\nI0603 14:06:19.579153 2836 log.go:172] (0xc0001168f0) (0xc0002de140) Stream removed, broadcasting: 3\nI0603 14:06:19.579158 2836 log.go:172] (0xc0001168f0) (0xc0002de000) Stream removed, broadcasting: 5\n" Jun 3 14:06:19.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:06:19.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 14:06:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:06:19.886: INFO: stderr: "I0603 14:06:19.716587 2856 log.go:172] (0xc00013cfd0) (0xc00062eaa0) Create stream\nI0603 14:06:19.716663 2856 log.go:172] (0xc00013cfd0) 
(0xc00062eaa0) Stream added, broadcasting: 1\nI0603 14:06:19.720958 2856 log.go:172] (0xc00013cfd0) Reply frame received for 1\nI0603 14:06:19.721391 2856 log.go:172] (0xc00013cfd0) (0xc000a6c000) Create stream\nI0603 14:06:19.721434 2856 log.go:172] (0xc00013cfd0) (0xc000a6c000) Stream added, broadcasting: 3\nI0603 14:06:19.722593 2856 log.go:172] (0xc00013cfd0) Reply frame received for 3\nI0603 14:06:19.722646 2856 log.go:172] (0xc00013cfd0) (0xc000a6c0a0) Create stream\nI0603 14:06:19.722678 2856 log.go:172] (0xc00013cfd0) (0xc000a6c0a0) Stream added, broadcasting: 5\nI0603 14:06:19.723868 2856 log.go:172] (0xc00013cfd0) Reply frame received for 5\nI0603 14:06:19.802407 2856 log.go:172] (0xc00013cfd0) Data frame received for 5\nI0603 14:06:19.802439 2856 log.go:172] (0xc000a6c0a0) (5) Data frame handling\nI0603 14:06:19.802459 2856 log.go:172] (0xc000a6c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:06:19.875203 2856 log.go:172] (0xc00013cfd0) Data frame received for 3\nI0603 14:06:19.875230 2856 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0603 14:06:19.875242 2856 log.go:172] (0xc000a6c000) (3) Data frame sent\nI0603 14:06:19.875250 2856 log.go:172] (0xc00013cfd0) Data frame received for 3\nI0603 14:06:19.875256 2856 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0603 14:06:19.875315 2856 log.go:172] (0xc00013cfd0) Data frame received for 5\nI0603 14:06:19.875331 2856 log.go:172] (0xc000a6c0a0) (5) Data frame handling\nI0603 14:06:19.877444 2856 log.go:172] (0xc00013cfd0) Data frame received for 1\nI0603 14:06:19.877476 2856 log.go:172] (0xc00062eaa0) (1) Data frame handling\nI0603 14:06:19.877496 2856 log.go:172] (0xc00062eaa0) (1) Data frame sent\nI0603 14:06:19.877514 2856 log.go:172] (0xc00013cfd0) (0xc00062eaa0) Stream removed, broadcasting: 1\nI0603 14:06:19.877534 2856 log.go:172] (0xc00013cfd0) Go away received\nI0603 14:06:19.878010 2856 log.go:172] (0xc00013cfd0) (0xc00062eaa0) Stream removed, 
broadcasting: 1\nI0603 14:06:19.878051 2856 log.go:172] (0xc00013cfd0) (0xc000a6c000) Stream removed, broadcasting: 3\nI0603 14:06:19.878069 2856 log.go:172] (0xc00013cfd0) (0xc000a6c0a0) Stream removed, broadcasting: 5\n" Jun 3 14:06:19.886: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:06:19.886: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 14:06:19.886: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 14:06:19.893: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 3 14:06:29.900: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 14:06:29.900: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 3 14:06:29.900: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 3 14:06:29.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999289s Jun 3 14:06:30.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990940996s Jun 3 14:06:31.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98663463s Jun 3 14:06:32.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981518604s Jun 3 14:06:33.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976491206s Jun 3 14:06:34.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971628305s Jun 3 14:06:35.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953081833s Jun 3 14:06:36.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948179726s Jun 3 14:06:37.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.942679094s Jun 3 14:06:38.975: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.585371ms STEP: Scaling down stateful set ss to 0 replicas 
and waiting until none of pods will run in namespace statefulset-4218 Jun 3 14:06:39.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:06:40.225: INFO: stderr: "I0603 14:06:40.121668 2878 log.go:172] (0xc00013adc0) (0xc0002f8820) Create stream\nI0603 14:06:40.121746 2878 log.go:172] (0xc00013adc0) (0xc0002f8820) Stream added, broadcasting: 1\nI0603 14:06:40.126075 2878 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0603 14:06:40.126190 2878 log.go:172] (0xc00013adc0) (0xc0008a2000) Create stream\nI0603 14:06:40.126229 2878 log.go:172] (0xc00013adc0) (0xc0008a2000) Stream added, broadcasting: 3\nI0603 14:06:40.127513 2878 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0603 14:06:40.127598 2878 log.go:172] (0xc00013adc0) (0xc0002f88c0) Create stream\nI0603 14:06:40.127631 2878 log.go:172] (0xc00013adc0) (0xc0002f88c0) Stream added, broadcasting: 5\nI0603 14:06:40.129033 2878 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0603 14:06:40.217485 2878 log.go:172] (0xc00013adc0) Data frame received for 5\nI0603 14:06:40.217536 2878 log.go:172] (0xc0002f88c0) (5) Data frame handling\nI0603 14:06:40.217557 2878 log.go:172] (0xc0002f88c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:06:40.217583 2878 log.go:172] (0xc00013adc0) Data frame received for 3\nI0603 14:06:40.217604 2878 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0603 14:06:40.217618 2878 log.go:172] (0xc0008a2000) (3) Data frame sent\nI0603 14:06:40.217795 2878 log.go:172] (0xc00013adc0) Data frame received for 3\nI0603 14:06:40.217815 2878 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0603 14:06:40.217845 2878 log.go:172] (0xc00013adc0) Data frame received for 5\nI0603 14:06:40.217865 2878 log.go:172] (0xc0002f88c0) (5) Data frame handling\nI0603 14:06:40.219185 2878 log.go:172] (0xc00013adc0) Data 
frame received for 1\nI0603 14:06:40.219210 2878 log.go:172] (0xc0002f8820) (1) Data frame handling\nI0603 14:06:40.219228 2878 log.go:172] (0xc0002f8820) (1) Data frame sent\nI0603 14:06:40.219250 2878 log.go:172] (0xc00013adc0) (0xc0002f8820) Stream removed, broadcasting: 1\nI0603 14:06:40.219284 2878 log.go:172] (0xc00013adc0) Go away received\nI0603 14:06:40.219677 2878 log.go:172] (0xc00013adc0) (0xc0002f8820) Stream removed, broadcasting: 1\nI0603 14:06:40.219712 2878 log.go:172] (0xc00013adc0) (0xc0008a2000) Stream removed, broadcasting: 3\nI0603 14:06:40.219727 2878 log.go:172] (0xc00013adc0) (0xc0002f88c0) Stream removed, broadcasting: 5\n" Jun 3 14:06:40.225: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:06:40.225: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:06:40.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:06:40.421: INFO: stderr: "I0603 14:06:40.341945 2900 log.go:172] (0xc00012ae70) (0xc00082e640) Create stream\nI0603 14:06:40.341988 2900 log.go:172] (0xc00012ae70) (0xc00082e640) Stream added, broadcasting: 1\nI0603 14:06:40.344178 2900 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0603 14:06:40.344227 2900 log.go:172] (0xc00012ae70) (0xc00090c000) Create stream\nI0603 14:06:40.344239 2900 log.go:172] (0xc00012ae70) (0xc00090c000) Stream added, broadcasting: 3\nI0603 14:06:40.345075 2900 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0603 14:06:40.345275 2900 log.go:172] (0xc00012ae70) (0xc000810000) Create stream\nI0603 14:06:40.345312 2900 log.go:172] (0xc00012ae70) (0xc000810000) Stream added, broadcasting: 5\nI0603 14:06:40.345998 2900 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0603 14:06:40.414048 2900 log.go:172] (0xc00012ae70) Data 
frame received for 3\nI0603 14:06:40.414082 2900 log.go:172] (0xc00090c000) (3) Data frame handling\nI0603 14:06:40.414100 2900 log.go:172] (0xc00090c000) (3) Data frame sent\nI0603 14:06:40.414113 2900 log.go:172] (0xc00012ae70) Data frame received for 3\nI0603 14:06:40.414167 2900 log.go:172] (0xc00090c000) (3) Data frame handling\nI0603 14:06:40.414283 2900 log.go:172] (0xc00012ae70) Data frame received for 5\nI0603 14:06:40.414317 2900 log.go:172] (0xc000810000) (5) Data frame handling\nI0603 14:06:40.414351 2900 log.go:172] (0xc000810000) (5) Data frame sent\nI0603 14:06:40.414388 2900 log.go:172] (0xc00012ae70) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:06:40.414410 2900 log.go:172] (0xc000810000) (5) Data frame handling\nI0603 14:06:40.415966 2900 log.go:172] (0xc00012ae70) Data frame received for 1\nI0603 14:06:40.415996 2900 log.go:172] (0xc00082e640) (1) Data frame handling\nI0603 14:06:40.416015 2900 log.go:172] (0xc00082e640) (1) Data frame sent\nI0603 14:06:40.416035 2900 log.go:172] (0xc00012ae70) (0xc00082e640) Stream removed, broadcasting: 1\nI0603 14:06:40.416067 2900 log.go:172] (0xc00012ae70) Go away received\nI0603 14:06:40.416472 2900 log.go:172] (0xc00012ae70) (0xc00082e640) Stream removed, broadcasting: 1\nI0603 14:06:40.416503 2900 log.go:172] (0xc00012ae70) (0xc00090c000) Stream removed, broadcasting: 3\nI0603 14:06:40.416512 2900 log.go:172] (0xc00012ae70) (0xc000810000) Stream removed, broadcasting: 5\n" Jun 3 14:06:40.421: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:06:40.421: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:06:40.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4218 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:06:40.654: INFO: stderr: "I0603 
14:06:40.574003 2920 log.go:172] (0xc00012a6e0) (0xc00032c6e0) Create stream\nI0603 14:06:40.574047 2920 log.go:172] (0xc00012a6e0) (0xc00032c6e0) Stream added, broadcasting: 1\nI0603 14:06:40.576244 2920 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0603 14:06:40.576278 2920 log.go:172] (0xc00012a6e0) (0xc0007ea1e0) Create stream\nI0603 14:06:40.576288 2920 log.go:172] (0xc00012a6e0) (0xc0007ea1e0) Stream added, broadcasting: 3\nI0603 14:06:40.576951 2920 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0603 14:06:40.576978 2920 log.go:172] (0xc00012a6e0) (0xc00032c000) Create stream\nI0603 14:06:40.576987 2920 log.go:172] (0xc00012a6e0) (0xc00032c000) Stream added, broadcasting: 5\nI0603 14:06:40.577855 2920 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0603 14:06:40.645422 2920 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0603 14:06:40.645467 2920 log.go:172] (0xc0007ea1e0) (3) Data frame handling\nI0603 14:06:40.645478 2920 log.go:172] (0xc0007ea1e0) (3) Data frame sent\nI0603 14:06:40.645486 2920 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0603 14:06:40.645493 2920 log.go:172] (0xc0007ea1e0) (3) Data frame handling\nI0603 14:06:40.645517 2920 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0603 14:06:40.645550 2920 log.go:172] (0xc00032c000) (5) Data frame handling\nI0603 14:06:40.645572 2920 log.go:172] (0xc00032c000) (5) Data frame sent\nI0603 14:06:40.645591 2920 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0603 14:06:40.645607 2920 log.go:172] (0xc00032c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:06:40.647376 2920 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0603 14:06:40.647408 2920 log.go:172] (0xc00032c6e0) (1) Data frame handling\nI0603 14:06:40.647427 2920 log.go:172] (0xc00032c6e0) (1) Data frame sent\nI0603 14:06:40.647447 2920 log.go:172] (0xc00012a6e0) (0xc00032c6e0) Stream removed, broadcasting: 1\nI0603 14:06:40.647472 2920 
log.go:172] (0xc00012a6e0) Go away received\nI0603 14:06:40.647833 2920 log.go:172] (0xc00012a6e0) (0xc00032c6e0) Stream removed, broadcasting: 1\nI0603 14:06:40.647851 2920 log.go:172] (0xc00012a6e0) (0xc0007ea1e0) Stream removed, broadcasting: 3\nI0603 14:06:40.647859 2920 log.go:172] (0xc00012a6e0) (0xc00032c000) Stream removed, broadcasting: 5\n" Jun 3 14:06:40.654: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:06:40.654: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:06:40.654: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 3 14:07:10.671: INFO: Deleting all statefulset in ns statefulset-4218 Jun 3 14:07:10.675: INFO: Scaling statefulset ss to 0 Jun 3 14:07:10.683: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 14:07:10.686: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:07:10.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4218" for this suite. 
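Editor's note: the repeated `mv -v ... || true` execs in this StatefulSet test are how readiness is toggled — moving `index.html` out of the nginx web root fails the readiness probe, and moving it back restores it. A minimal local sketch of the idempotent-move pattern (the temp paths are stand-ins; the real test operates on `/usr/share/nginx/html` inside the `ss-*` pods):

```shell
# Stand-ins for the pod's web root and the /tmp stash directory.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo hello > "$webroot/index.html"

# "Break" readiness: move the page away. '|| true' keeps the exit status 0
# even if the file was already moved, so a repeated exec does not fail.
mv -v "$webroot/index.html" "$stash/" || true
mv -v "$webroot/index.html" "$stash/" || true   # second run: mv errors, exit stays 0

# "Restore" readiness: move the page back into the web root.
mv -v "$stash/index.html" "$webroot/" || true
cat "$webroot/index.html"
```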
Jun 3 14:07:16.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:07:16.787: INFO: namespace statefulset-4218 deletion completed in 6.081811732s • [SLOW TEST:98.427 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:07:16.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-4098751b-b9d5-49e3-8e9f-6ae61edc0ba7 STEP: Creating a pod to test consume secrets Jun 3 14:07:16.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c" in namespace "projected-9492" to be "success or failure" Jun 3 14:07:16.899: INFO: Pod "pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.986858ms Jun 3 14:07:18.903: INFO: Pod "pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020537687s Jun 3 14:07:20.909: INFO: Pod "pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025979537s STEP: Saw pod success Jun 3 14:07:20.909: INFO: Pod "pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c" satisfied condition "success or failure" Jun 3 14:07:20.913: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c container secret-volume-test: STEP: delete the pod Jun 3 14:07:20.936: INFO: Waiting for pod pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c to disappear Jun 3 14:07:20.951: INFO: Pod pod-projected-secrets-29e7dba4-dcbc-4313-801a-9bf34294572c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:07:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9492" for this suite. 
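Editor's note: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll-until-deadline loop in the framework. A rough local sketch of that pattern — a file stands in for the pod phase, and the counter stands in for the 5m budget; all names here are invented:

```shell
phase_file=$(mktemp)
echo Pending > "$phase_file"
# Background stand-in for the kubelet: flips the phase after a moment.
( sleep 1; echo Succeeded > "$phase_file" ) &

tries=0
phase=Pending
# Poll every 0.2s, give up after 50 tries (~10s); the framework's budget is 5m.
while [ "$tries" -lt 50 ]; do
  phase=$(cat "$phase_file")
  case "$phase" in Succeeded|Failed) break ;; esac
  tries=$((tries + 1))
  sleep 0.2
done
echo "final phase: $phase"
```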
Jun 3 14:07:26.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:07:27.053: INFO: namespace projected-9492 deletion completed in 6.098454112s • [SLOW TEST:10.266 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:07:27.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-93dbe20b-7ea0-4723-94f7-ac614b0d4777 STEP: Creating a pod to test consume secrets Jun 3 14:07:27.118: INFO: Waiting up to 5m0s for pod "pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2" in namespace "secrets-7018" to be "success or failure" Jun 3 14:07:27.144: INFO: Pod "pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.727557ms Jun 3 14:07:29.149: INFO: Pod "pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031185922s Jun 3 14:07:31.158: INFO: Pod "pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040329033s STEP: Saw pod success Jun 3 14:07:31.158: INFO: Pod "pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2" satisfied condition "success or failure" Jun 3 14:07:31.162: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2 container secret-volume-test: STEP: delete the pod Jun 3 14:07:31.197: INFO: Waiting for pod pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2 to disappear Jun 3 14:07:31.216: INFO: Pod pod-secrets-ad6c676a-57df-44b5-b8e2-9c05e35632f2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:07:31.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7018" for this suite. 
Jun 3 14:07:37.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:07:37.304: INFO: namespace secrets-7018 deletion completed in 6.084146655s • [SLOW TEST:10.251 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:07:37.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 3 14:07:37.386: INFO: Waiting up to 5m0s for pod "pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c" in namespace "emptydir-6292" to be "success or failure" Jun 3 14:07:37.427: INFO: Pod "pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.524785ms Jun 3 14:07:39.430: INFO: Pod "pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.043794123s Jun 3 14:07:41.435: INFO: Pod "pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048270382s STEP: Saw pod success Jun 3 14:07:41.435: INFO: Pod "pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c" satisfied condition "success or failure" Jun 3 14:07:41.437: INFO: Trying to get logs from node iruya-worker pod pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c container test-container: STEP: delete the pod Jun 3 14:07:41.476: INFO: Waiting for pod pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c to disappear Jun 3 14:07:41.504: INFO: Pod pod-dabc7ef0-aa14-4bd8-bca0-0804407e740c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:07:41.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6292" for this suite. Jun 3 14:07:47.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:07:47.584: INFO: namespace emptydir-6292 deletion completed in 6.076013495s • [SLOW TEST:10.280 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:07:47.584: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 3 14:07:55.873: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 14:07:55.882: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 14:07:57.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 14:07:57.886: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 14:07:59.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 14:07:59.886: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 14:08:01.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 14:08:01.886: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 14:08:03.882: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 14:08:03.887: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:08:03.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5728" for this suite. 
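Editor's note: the pod in this lifecycle-hook test carries a postStart httpGet hook that the kubelet fires against the helper container created in BeforeEach. A sketch of what such a manifest looks like — the image, host address, and path are illustrative, not the exact values the framework uses:

```shell
cat > /tmp/poststart-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: nginx              # illustrative image
    lifecycle:
      postStart:
        httpGet:              # kubelet GETs this right after the container starts
          host: 10.0.0.10     # hypothetical address of the hook-handler pod
          port: 8080
          path: /echo?msg=poststart
EOF
grep -c postStart /tmp/poststart-pod.yaml
```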
Jun 3 14:08:25.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:08:25.994: INFO: namespace container-lifecycle-hook-5728 deletion completed in 22.102876808s • [SLOW TEST:38.410 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:08:25.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-9ddffb9f-a755-4924-ad9b-57eb4adf8582 STEP: Creating a pod to test consume secrets Jun 3 14:08:26.149: INFO: Waiting up to 5m0s for pod "pod-secrets-cce73732-280c-410f-b708-4f1fec605fff" in namespace "secrets-4534" to be "success or failure" Jun 3 14:08:26.152: INFO: Pod 
"pod-secrets-cce73732-280c-410f-b708-4f1fec605fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303432ms Jun 3 14:08:28.267: INFO: Pod "pod-secrets-cce73732-280c-410f-b708-4f1fec605fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117071106s Jun 3 14:08:30.271: INFO: Pod "pod-secrets-cce73732-280c-410f-b708-4f1fec605fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121635779s STEP: Saw pod success Jun 3 14:08:30.271: INFO: Pod "pod-secrets-cce73732-280c-410f-b708-4f1fec605fff" satisfied condition "success or failure" Jun 3 14:08:30.274: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cce73732-280c-410f-b708-4f1fec605fff container secret-volume-test: STEP: delete the pod Jun 3 14:08:30.306: INFO: Waiting for pod pod-secrets-cce73732-280c-410f-b708-4f1fec605fff to disappear Jun 3 14:08:30.314: INFO: Pod pod-secrets-cce73732-280c-410f-b708-4f1fec605fff no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:08:30.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4534" for this suite. Jun 3 14:08:36.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:08:36.411: INFO: namespace secrets-4534 deletion completed in 6.094570652s STEP: Destroying namespace "secret-namespace-7354" for this suite. 
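Editor's note: this test mounts a secret while a same-named secret exists in another namespace, relying on the fact that `secretName` is resolved only in the pod's own namespace. A sketch of the kind of manifest involved — the names, image, and paths are illustrative:

```shell
cat > /tmp/secret-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # illustrative name
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # resolved in the pod's own namespace only
  containers:
  - name: secret-volume-test
    image: busybox                # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
EOF
grep -c secretName /tmp/secret-pod.yaml
```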
Jun 3 14:08:42.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:08:42.523: INFO: namespace secret-namespace-7354 deletion completed in 6.111995603s • [SLOW TEST:16.529 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:08:42.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-48d7cbf8-1218-4db1-b9ef-2f890b8f39a4 STEP: Creating secret with name s-test-opt-upd-1871929c-9716-4c0d-aa93-8bb61b47bee1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-48d7cbf8-1218-4db1-b9ef-2f890b8f39a4 STEP: Updating secret s-test-opt-upd-1871929c-9716-4c0d-aa93-8bb61b47bee1 STEP: Creating secret with name s-test-opt-create-9cce604a-2fdf-4025-b66a-5087159e0259 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:09:55.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7186" for this suite. Jun 3 14:10:17.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:10:17.165: INFO: namespace secrets-7186 deletion completed in 22.096047424s • [SLOW TEST:94.641 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:10:17.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 3 14:10:17.251: INFO: Waiting up to 5m0s for pod "pod-451e149a-7b46-4fa9-ace7-191be9f90790" in namespace "emptydir-6071" to be "success or failure" Jun 3 14:10:17.271: INFO: Pod "pod-451e149a-7b46-4fa9-ace7-191be9f90790": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.848011ms Jun 3 14:10:19.276: INFO: Pod "pod-451e149a-7b46-4fa9-ace7-191be9f90790": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024266009s Jun 3 14:10:21.280: INFO: Pod "pod-451e149a-7b46-4fa9-ace7-191be9f90790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028144759s STEP: Saw pod success Jun 3 14:10:21.280: INFO: Pod "pod-451e149a-7b46-4fa9-ace7-191be9f90790" satisfied condition "success or failure" Jun 3 14:10:21.283: INFO: Trying to get logs from node iruya-worker2 pod pod-451e149a-7b46-4fa9-ace7-191be9f90790 container test-container: STEP: delete the pod Jun 3 14:10:21.306: INFO: Waiting for pod pod-451e149a-7b46-4fa9-ace7-191be9f90790 to disappear Jun 3 14:10:21.310: INFO: Pod pod-451e149a-7b46-4fa9-ace7-191be9f90790 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:10:21.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6071" for this suite. 
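Editor's note: the `(non-root,0777,tmpfs)` case mounts a tmpfs-backed emptyDir and verifies the mount's permission bits from inside the pod. The permission check itself can be sketched locally — a plain temp directory stands in for the tmpfs mount point:

```shell
mount_dir=$(mktemp -d)        # stand-in for the emptyDir mount point
chmod 0777 "$mount_dir"       # the mode under test for this medium
# GNU stat first, BSD stat as a fallback.
mode=$(stat -c '%a' "$mount_dir" 2>/dev/null || stat -f '%Lp' "$mount_dir")
echo "mount mode: $mode"
```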
Jun 3 14:10:27.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:10:27.404: INFO: namespace emptydir-6071 deletion completed in 6.090728869s • [SLOW TEST:10.239 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:10:27.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-3218aca5-9388-4117-b0ee-5a604f202bf6 STEP: Creating a pod to test consume secrets Jun 3 14:10:27.492: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc" in namespace "projected-5414" to be "success or failure" Jun 3 14:10:27.496: INFO: Pod "pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.751862ms Jun 3 14:10:29.500: INFO: Pod "pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008050553s Jun 3 14:10:31.505: INFO: Pod "pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012890228s STEP: Saw pod success Jun 3 14:10:31.505: INFO: Pod "pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc" satisfied condition "success or failure" Jun 3 14:10:31.508: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc container projected-secret-volume-test: STEP: delete the pod Jun 3 14:10:31.533: INFO: Waiting for pod pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc to disappear Jun 3 14:10:31.538: INFO: Pod pod-projected-secrets-12f5a4d7-c064-4099-bd43-21759f95cddc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:10:31.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5414" for this suite. 
Jun 3 14:10:37.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:10:37.619: INFO: namespace projected-5414 deletion completed in 6.078487834s
• [SLOW TEST:10.213 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:10:37.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0603 14:10:47.742472 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 3 14:10:47.742: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:10:47.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4917" for this suite.
Jun 3 14:10:53.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:10:53.835: INFO: namespace gc-4917 deletion completed in 6.089208004s
• [SLOW TEST:16.216 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:10:53.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:10:58.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4111" for this suite.
Jun 3 14:11:04.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:11:04.140: INFO: namespace emptydir-wrapper-4111 deletion completed in 6.128228434s
• [SLOW TEST:10.305 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:11:04.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 3 14:11:04.271: INFO: Waiting up to 5m0s for pod "downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3" in namespace "downward-api-1317" to be "success or failure"
Jun 3 14:11:04.292: INFO: Pod "downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.54895ms
Jun 3 14:11:06.297: INFO: Pod "downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026717568s
Jun 3 14:11:08.302: INFO: Pod "downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031132408s
STEP: Saw pod success
Jun 3 14:11:08.302: INFO: Pod "downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3" satisfied condition "success or failure"
Jun 3 14:11:08.306: INFO: Trying to get logs from node iruya-worker pod downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3 container dapi-container:
STEP: delete the pod
Jun 3 14:11:08.325: INFO: Waiting for pod downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3 to disappear
Jun 3 14:11:08.329: INFO: Pod downward-api-49ed8125-6a38-461d-b1c9-18c0cfe642f3 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:11:08.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1317" for this suite.
Jun 3 14:11:14.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:11:14.469: INFO: namespace downward-api-1317 deletion completed in 6.137251146s
• [SLOW TEST:10.327 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:11:14.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-dz6t
STEP: Creating a pod to test atomic-volume-subpath
Jun 3 14:11:14.544: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dz6t" in namespace "subpath-453" to be "success or failure"
Jun 3 14:11:14.562: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Pending", Reason="", readiness=false. Elapsed: 18.144943ms
Jun 3 14:11:16.592: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047690104s
Jun 3 14:11:18.596: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 4.052286608s
Jun 3 14:11:20.604: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 6.060092323s
Jun 3 14:11:22.610: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 8.06597732s
Jun 3 14:11:24.615: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 10.07039431s
Jun 3 14:11:26.619: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 12.074621145s
Jun 3 14:11:28.623: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 14.078790924s
Jun 3 14:11:30.626: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 16.082301868s
Jun 3 14:11:32.630: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 18.086280835s
Jun 3 14:11:34.635: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 20.090555932s
Jun 3 14:11:36.639: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Running", Reason="", readiness=true. Elapsed: 22.094947271s
Jun 3 14:11:38.643: INFO: Pod "pod-subpath-test-configmap-dz6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.099190173s
STEP: Saw pod success
Jun 3 14:11:38.643: INFO: Pod "pod-subpath-test-configmap-dz6t" satisfied condition "success or failure"
Jun 3 14:11:38.647: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-dz6t container test-container-subpath-configmap-dz6t:
STEP: delete the pod
Jun 3 14:11:38.671: INFO: Waiting for pod pod-subpath-test-configmap-dz6t to disappear
Jun 3 14:11:38.677: INFO: Pod pod-subpath-test-configmap-dz6t no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dz6t
Jun 3 14:11:38.677: INFO: Deleting pod "pod-subpath-test-configmap-dz6t" in namespace "subpath-453"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:11:38.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-453" for this suite.
Jun 3 14:11:44.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:11:44.810: INFO: namespace subpath-453 deletion completed in 6.126662148s
• [SLOW TEST:30.340 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:11:44.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 3 14:11:44.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517" in namespace "downward-api-3742" to be "success or failure"
Jun 3 14:11:44.875: INFO: Pod "downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364179ms
Jun 3 14:11:46.880: INFO: Pod "downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008290829s
Jun 3 14:11:48.887: INFO: Pod "downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014996056s
STEP: Saw pod success
Jun 3 14:11:48.887: INFO: Pod "downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517" satisfied condition "success or failure"
Jun 3 14:11:48.890: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517 container client-container:
STEP: delete the pod
Jun 3 14:11:48.946: INFO: Waiting for pod downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517 to disappear
Jun 3 14:11:48.959: INFO: Pod downwardapi-volume-4931de01-0259-442a-91eb-d9d05cfe9517 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:11:48.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3742" for this suite.
Jun 3 14:11:54.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:11:55.064: INFO: namespace downward-api-3742 deletion completed in 6.102062611s
• [SLOW TEST:10.254 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:11:55.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-1af2b737-ce94-45be-a806-39417884e963 in namespace container-probe-3028
Jun 3 14:11:59.155: INFO: Started pod liveness-1af2b737-ce94-45be-a806-39417884e963 in namespace container-probe-3028
STEP: checking the pod's current state and verifying that restartCount is present
Jun 3 14:11:59.158: INFO: Initial restart count of pod liveness-1af2b737-ce94-45be-a806-39417884e963 is 0
Jun 3 14:12:17.406: INFO: Restart count of pod container-probe-3028/liveness-1af2b737-ce94-45be-a806-39417884e963 is now 1 (18.247610214s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:12:17.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3028" for this suite.
Jun 3 14:12:23.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:12:23.570: INFO: namespace container-probe-3028 deletion completed in 6.110289157s
• [SLOW TEST:28.505 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:12:23.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1b812481-a9c1-4245-a2db-d1cb0b090be5
STEP: Creating a pod to test consume configMaps
Jun 3 14:12:23.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf" in namespace "configmap-2419" to be "success or failure"
Jun 3 14:12:23.724: INFO: Pod "pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423614ms
Jun 3 14:12:25.728: INFO: Pod "pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007570526s
Jun 3 14:12:27.732: INFO: Pod "pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012290586s
STEP: Saw pod success
Jun 3 14:12:27.733: INFO: Pod "pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf" satisfied condition "success or failure"
Jun 3 14:12:27.736: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf container configmap-volume-test:
STEP: delete the pod
Jun 3 14:12:27.774: INFO: Waiting for pod pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf to disappear
Jun 3 14:12:27.795: INFO: Pod pod-configmaps-20df12cb-ade6-44be-b431-170e40707acf no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:12:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2419" for this suite.
Jun 3 14:12:33.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:12:33.907: INFO: namespace configmap-2419 deletion completed in 6.108124131s
• [SLOW TEST:10.337 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:12:33.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 3 14:12:33.987: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453182,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 3 14:12:33.987: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453182,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 3 14:12:43.996: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453203,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 3 14:12:43.996: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453203,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 3 14:12:54.005: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453224,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 3 14:12:54.005: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453224,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 3 14:13:04.013: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453246,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 3 14:13:04.013: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-a,UID:a0cb8439-6c0a-4441-a3a1-b414aad07c07,ResourceVersion:14453246,Generation:0,CreationTimestamp:2020-06-03 14:12:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 3 14:13:14.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-b,UID:6478f53f-28fc-4048-ab3b-033e2481a596,ResourceVersion:14453266,Generation:0,CreationTimestamp:2020-06-03 14:13:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 3 14:13:14.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-b,UID:6478f53f-28fc-4048-ab3b-033e2481a596,ResourceVersion:14453266,Generation:0,CreationTimestamp:2020-06-03 14:13:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 3 14:13:24.029: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-b,UID:6478f53f-28fc-4048-ab3b-033e2481a596,ResourceVersion:14453286,Generation:0,CreationTimestamp:2020-06-03 14:13:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 3 14:13:24.029: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9528,SelfLink:/api/v1/namespaces/watch-9528/configmaps/e2e-watch-test-configmap-b,UID:6478f53f-28fc-4048-ab3b-033e2481a596,ResourceVersion:14453286,Generation:0,CreationTimestamp:2020-06-03 14:13:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:13:34.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9528" for this suite.
Jun 3 14:13:40.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:13:40.120: INFO: namespace watch-9528 deletion completed in 6.085506876s
• [SLOW TEST:66.213 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:13:40.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jun 3 14:13:40.823: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3554" to be "success or failure"
Jun 3 14:13:40.826: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.981995ms
Jun 3 14:13:43.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255764903s
Jun 3 14:13:45.084: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260414532s
Jun 3 14:13:47.088: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265046257s
STEP: Saw pod success
Jun 3 14:13:47.088: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 3 14:13:47.092: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jun 3 14:13:47.118: INFO: Waiting for pod pod-host-path-test to disappear
Jun 3 14:13:47.123: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:13:47.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3554" for this suite.
Jun 3 14:13:53.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:13:53.260: INFO: namespace hostpath-3554 deletion completed in 6.134418419s
• [SLOW TEST:13.140 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:13:53.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should
proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 3 14:13:53.303: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 5.294592ms)
Jun 3 14:13:53.306: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.738355ms)
Jun 3 14:13:53.310: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.496819ms)
Jun 3 14:13:53.313: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.376027ms)
Jun 3 14:13:53.317: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.848795ms)
Jun 3 14:13:53.320: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.827855ms)
Jun 3 14:13:53.323: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.869398ms)
Jun 3 14:13:53.326: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.954397ms)
Jun 3 14:13:53.329: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.674625ms)
Jun 3 14:13:53.332: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.153173ms)
Jun 3 14:13:53.335: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.056885ms)
Jun 3 14:13:53.338: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.319702ms)
Jun 3 14:13:53.341: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.676921ms)
Jun 3 14:13:53.343: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.439901ms)
Jun 3 14:13:53.347: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.308165ms)
Jun 3 14:13:53.379: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 32.508563ms)
Jun 3 14:13:53.383: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.598681ms)
Jun 3 14:13:53.386: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.523655ms)
Jun 3 14:13:53.389: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.943352ms)
Jun 3 14:13:53.393: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.103453ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:13:53.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5373" for this suite. Jun 3 14:13:59.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:13:59.495: INFO: namespace proxy-5373 deletion completed in 6.099440092s • [SLOW TEST:6.234 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:13:59.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8763 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 14:13:59.568: INFO: Waiting up to 10m0s for all (but 0) nodes to 
be schedulable STEP: Creating test pods Jun 3 14:14:23.684: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.56:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8763 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:14:23.684: INFO: >>> kubeConfig: /root/.kube/config I0603 14:14:23.719894 6 log.go:172] (0xc000d16a50) (0xc0029a15e0) Create stream I0603 14:14:23.719923 6 log.go:172] (0xc000d16a50) (0xc0029a15e0) Stream added, broadcasting: 1 I0603 14:14:23.721988 6 log.go:172] (0xc000d16a50) Reply frame received for 1 I0603 14:14:23.722021 6 log.go:172] (0xc000d16a50) (0xc001c60f00) Create stream I0603 14:14:23.722031 6 log.go:172] (0xc000d16a50) (0xc001c60f00) Stream added, broadcasting: 3 I0603 14:14:23.722802 6 log.go:172] (0xc000d16a50) Reply frame received for 3 I0603 14:14:23.722830 6 log.go:172] (0xc000d16a50) (0xc0029a1720) Create stream I0603 14:14:23.722841 6 log.go:172] (0xc000d16a50) (0xc0029a1720) Stream added, broadcasting: 5 I0603 14:14:23.723651 6 log.go:172] (0xc000d16a50) Reply frame received for 5 I0603 14:14:23.800692 6 log.go:172] (0xc000d16a50) Data frame received for 5 I0603 14:14:23.800730 6 log.go:172] (0xc0029a1720) (5) Data frame handling I0603 14:14:23.800754 6 log.go:172] (0xc000d16a50) Data frame received for 3 I0603 14:14:23.800780 6 log.go:172] (0xc001c60f00) (3) Data frame handling I0603 14:14:23.800800 6 log.go:172] (0xc001c60f00) (3) Data frame sent I0603 14:14:23.800835 6 log.go:172] (0xc000d16a50) Data frame received for 3 I0603 14:14:23.800854 6 log.go:172] (0xc001c60f00) (3) Data frame handling I0603 14:14:23.802874 6 log.go:172] (0xc000d16a50) Data frame received for 1 I0603 14:14:23.802894 6 log.go:172] (0xc0029a15e0) (1) Data frame handling I0603 14:14:23.802903 6 log.go:172] (0xc0029a15e0) (1) Data frame sent I0603 14:14:23.802913 6 log.go:172] (0xc000d16a50) (0xc0029a15e0) 
Stream removed, broadcasting: 1 I0603 14:14:23.802995 6 log.go:172] (0xc000d16a50) (0xc0029a15e0) Stream removed, broadcasting: 1 I0603 14:14:23.803006 6 log.go:172] (0xc000d16a50) (0xc001c60f00) Stream removed, broadcasting: 3 I0603 14:14:23.803101 6 log.go:172] (0xc000d16a50) Go away received I0603 14:14:23.803134 6 log.go:172] (0xc000d16a50) (0xc0029a1720) Stream removed, broadcasting: 5 Jun 3 14:14:23.803: INFO: Found all expected endpoints: [netserver-0] Jun 3 14:14:23.807: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8763 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:14:23.807: INFO: >>> kubeConfig: /root/.kube/config I0603 14:14:23.843248 6 log.go:172] (0xc001c38c60) (0xc0021486e0) Create stream I0603 14:14:23.843291 6 log.go:172] (0xc001c38c60) (0xc0021486e0) Stream added, broadcasting: 1 I0603 14:14:23.845269 6 log.go:172] (0xc001c38c60) Reply frame received for 1 I0603 14:14:23.845305 6 log.go:172] (0xc001c38c60) (0xc001c60fa0) Create stream I0603 14:14:23.845317 6 log.go:172] (0xc001c38c60) (0xc001c60fa0) Stream added, broadcasting: 3 I0603 14:14:23.846465 6 log.go:172] (0xc001c38c60) Reply frame received for 3 I0603 14:14:23.846515 6 log.go:172] (0xc001c38c60) (0xc0029a1ae0) Create stream I0603 14:14:23.846533 6 log.go:172] (0xc001c38c60) (0xc0029a1ae0) Stream added, broadcasting: 5 I0603 14:14:23.847448 6 log.go:172] (0xc001c38c60) Reply frame received for 5 I0603 14:14:23.905055 6 log.go:172] (0xc001c38c60) Data frame received for 3 I0603 14:14:23.905093 6 log.go:172] (0xc001c60fa0) (3) Data frame handling I0603 14:14:23.905106 6 log.go:172] (0xc001c60fa0) (3) Data frame sent I0603 14:14:23.905309 6 log.go:172] (0xc001c38c60) Data frame received for 3 I0603 14:14:23.905329 6 log.go:172] (0xc001c60fa0) (3) Data frame handling I0603 
14:14:23.905453 6 log.go:172] (0xc001c38c60) Data frame received for 5 I0603 14:14:23.905486 6 log.go:172] (0xc0029a1ae0) (5) Data frame handling I0603 14:14:23.906882 6 log.go:172] (0xc001c38c60) Data frame received for 1 I0603 14:14:23.906922 6 log.go:172] (0xc0021486e0) (1) Data frame handling I0603 14:14:23.906950 6 log.go:172] (0xc0021486e0) (1) Data frame sent I0603 14:14:23.906969 6 log.go:172] (0xc001c38c60) (0xc0021486e0) Stream removed, broadcasting: 1 I0603 14:14:23.906990 6 log.go:172] (0xc001c38c60) Go away received I0603 14:14:23.907127 6 log.go:172] (0xc001c38c60) (0xc0021486e0) Stream removed, broadcasting: 1 I0603 14:14:23.907153 6 log.go:172] (0xc001c38c60) (0xc001c60fa0) Stream removed, broadcasting: 3 I0603 14:14:23.907163 6 log.go:172] (0xc001c38c60) (0xc0029a1ae0) Stream removed, broadcasting: 5 Jun 3 14:14:23.907: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:14:23.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8763" for this suite. 
Jun 3 14:14:47.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:14:48.010: INFO: namespace pod-network-test-8763 deletion completed in 24.09833111s • [SLOW TEST:48.514 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:14:48.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9444 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 3 14:14:48.089: INFO: Found 0 stateful pods, waiting for 3 Jun 3 14:14:58.094: INFO: 
Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:14:58.094: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:14:58.094: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 14:15:08.094: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:15:08.094: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:15:08.094: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 3 14:15:08.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:15:10.883: INFO: stderr: "I0603 14:15:10.749589 2941 log.go:172] (0xc000b8e420) (0xc000b8a780) Create stream\nI0603 14:15:10.749644 2941 log.go:172] (0xc000b8e420) (0xc000b8a780) Stream added, broadcasting: 1\nI0603 14:15:10.752655 2941 log.go:172] (0xc000b8e420) Reply frame received for 1\nI0603 14:15:10.752708 2941 log.go:172] (0xc000b8e420) (0xc0007d8000) Create stream\nI0603 14:15:10.752729 2941 log.go:172] (0xc000b8e420) (0xc0007d8000) Stream added, broadcasting: 3\nI0603 14:15:10.753869 2941 log.go:172] (0xc000b8e420) Reply frame received for 3\nI0603 14:15:10.753914 2941 log.go:172] (0xc000b8e420) (0xc0007d80a0) Create stream\nI0603 14:15:10.753926 2941 log.go:172] (0xc000b8e420) (0xc0007d80a0) Stream added, broadcasting: 5\nI0603 14:15:10.754946 2941 log.go:172] (0xc000b8e420) Reply frame received for 5\nI0603 14:15:10.842662 2941 log.go:172] (0xc000b8e420) Data frame received for 5\nI0603 14:15:10.842700 2941 log.go:172] (0xc0007d80a0) (5) Data frame handling\nI0603 14:15:10.842722 2941 log.go:172] (0xc0007d80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:15:10.873959 2941 log.go:172] 
(0xc000b8e420) Data frame received for 3\nI0603 14:15:10.874005 2941 log.go:172] (0xc0007d8000) (3) Data frame handling\nI0603 14:15:10.874026 2941 log.go:172] (0xc0007d8000) (3) Data frame sent\nI0603 14:15:10.874046 2941 log.go:172] (0xc000b8e420) Data frame received for 3\nI0603 14:15:10.874080 2941 log.go:172] (0xc0007d8000) (3) Data frame handling\nI0603 14:15:10.874246 2941 log.go:172] (0xc000b8e420) Data frame received for 5\nI0603 14:15:10.874286 2941 log.go:172] (0xc0007d80a0) (5) Data frame handling\nI0603 14:15:10.875384 2941 log.go:172] (0xc000b8e420) Data frame received for 1\nI0603 14:15:10.875434 2941 log.go:172] (0xc000b8a780) (1) Data frame handling\nI0603 14:15:10.875461 2941 log.go:172] (0xc000b8a780) (1) Data frame sent\nI0603 14:15:10.875536 2941 log.go:172] (0xc000b8e420) (0xc000b8a780) Stream removed, broadcasting: 1\nI0603 14:15:10.875570 2941 log.go:172] (0xc000b8e420) Go away received\nI0603 14:15:10.876182 2941 log.go:172] (0xc000b8e420) (0xc000b8a780) Stream removed, broadcasting: 1\nI0603 14:15:10.876204 2941 log.go:172] (0xc000b8e420) (0xc0007d8000) Stream removed, broadcasting: 3\nI0603 14:15:10.876218 2941 log.go:172] (0xc000b8e420) (0xc0007d80a0) Stream removed, broadcasting: 5\n" Jun 3 14:15:10.883: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:15:10.884: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 3 14:15:20.917: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 3 14:15:30.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:15:31.175: INFO: stderr: "I0603 14:15:31.077448 
2974 log.go:172] (0xc00057a420) (0xc0007b08c0) Create stream\nI0603 14:15:31.077506 2974 log.go:172] (0xc00057a420) (0xc0007b08c0) Stream added, broadcasting: 1\nI0603 14:15:31.080182 2974 log.go:172] (0xc00057a420) Reply frame received for 1\nI0603 14:15:31.080226 2974 log.go:172] (0xc00057a420) (0xc0007b01e0) Create stream\nI0603 14:15:31.080242 2974 log.go:172] (0xc00057a420) (0xc0007b01e0) Stream added, broadcasting: 3\nI0603 14:15:31.081055 2974 log.go:172] (0xc00057a420) Reply frame received for 3\nI0603 14:15:31.081093 2974 log.go:172] (0xc00057a420) (0xc0008ba000) Create stream\nI0603 14:15:31.081103 2974 log.go:172] (0xc00057a420) (0xc0008ba000) Stream added, broadcasting: 5\nI0603 14:15:31.082111 2974 log.go:172] (0xc00057a420) Reply frame received for 5\nI0603 14:15:31.169302 2974 log.go:172] (0xc00057a420) Data frame received for 3\nI0603 14:15:31.169362 2974 log.go:172] (0xc0007b01e0) (3) Data frame handling\nI0603 14:15:31.169401 2974 log.go:172] (0xc0007b01e0) (3) Data frame sent\nI0603 14:15:31.169420 2974 log.go:172] (0xc00057a420) Data frame received for 3\nI0603 14:15:31.169436 2974 log.go:172] (0xc0007b01e0) (3) Data frame handling\nI0603 14:15:31.169455 2974 log.go:172] (0xc00057a420) Data frame received for 5\nI0603 14:15:31.169469 2974 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0603 14:15:31.169483 2974 log.go:172] (0xc0008ba000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:15:31.169665 2974 log.go:172] (0xc00057a420) Data frame received for 5\nI0603 14:15:31.169684 2974 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0603 14:15:31.171248 2974 log.go:172] (0xc00057a420) Data frame received for 1\nI0603 14:15:31.171278 2974 log.go:172] (0xc0007b08c0) (1) Data frame handling\nI0603 14:15:31.171307 2974 log.go:172] (0xc0007b08c0) (1) Data frame sent\nI0603 14:15:31.171350 2974 log.go:172] (0xc00057a420) (0xc0007b08c0) Stream removed, broadcasting: 1\nI0603 14:15:31.171450 2974 log.go:172] 
(0xc00057a420) Go away received\nI0603 14:15:31.171702 2974 log.go:172] (0xc00057a420) (0xc0007b08c0) Stream removed, broadcasting: 1\nI0603 14:15:31.171721 2974 log.go:172] (0xc00057a420) (0xc0007b01e0) Stream removed, broadcasting: 3\nI0603 14:15:31.171730 2974 log.go:172] (0xc00057a420) (0xc0008ba000) Stream removed, broadcasting: 5\n" Jun 3 14:15:31.175: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:15:31.175: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:15:41.219: INFO: Waiting for StatefulSet statefulset-9444/ss2 to complete update Jun 3 14:15:41.219: INFO: Waiting for Pod statefulset-9444/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 3 14:15:41.219: INFO: Waiting for Pod statefulset-9444/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 3 14:15:41.219: INFO: Waiting for Pod statefulset-9444/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 3 14:15:51.227: INFO: Waiting for StatefulSet statefulset-9444/ss2 to complete update Jun 3 14:15:51.227: INFO: Waiting for Pod statefulset-9444/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 3 14:15:51.227: INFO: Waiting for Pod statefulset-9444/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 3 14:16:01.227: INFO: Waiting for StatefulSet statefulset-9444/ss2 to complete update Jun 3 14:16:01.227: INFO: Waiting for Pod statefulset-9444/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 3 14:16:11.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 3 14:16:11.495: INFO: stderr: "I0603 14:16:11.346210 2994 log.go:172] (0xc0009ac370) (0xc0002e66e0) Create stream\nI0603 
14:16:11.346257 2994 log.go:172] (0xc0009ac370) (0xc0002e66e0) Stream added, broadcasting: 1\nI0603 14:16:11.348126 2994 log.go:172] (0xc0009ac370) Reply frame received for 1\nI0603 14:16:11.348585 2994 log.go:172] (0xc0009ac370) (0xc0009d0000) Create stream\nI0603 14:16:11.348606 2994 log.go:172] (0xc0009ac370) (0xc0009d0000) Stream added, broadcasting: 3\nI0603 14:16:11.350185 2994 log.go:172] (0xc0009ac370) Reply frame received for 3\nI0603 14:16:11.350241 2994 log.go:172] (0xc0009ac370) (0xc0009d00a0) Create stream\nI0603 14:16:11.350259 2994 log.go:172] (0xc0009ac370) (0xc0009d00a0) Stream added, broadcasting: 5\nI0603 14:16:11.351232 2994 log.go:172] (0xc0009ac370) Reply frame received for 5\nI0603 14:16:11.453984 2994 log.go:172] (0xc0009ac370) Data frame received for 5\nI0603 14:16:11.454018 2994 log.go:172] (0xc0009d00a0) (5) Data frame handling\nI0603 14:16:11.454038 2994 log.go:172] (0xc0009d00a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0603 14:16:11.488258 2994 log.go:172] (0xc0009ac370) Data frame received for 5\nI0603 14:16:11.488294 2994 log.go:172] (0xc0009d00a0) (5) Data frame handling\nI0603 14:16:11.488324 2994 log.go:172] (0xc0009ac370) Data frame received for 3\nI0603 14:16:11.488358 2994 log.go:172] (0xc0009d0000) (3) Data frame handling\nI0603 14:16:11.488386 2994 log.go:172] (0xc0009d0000) (3) Data frame sent\nI0603 14:16:11.488399 2994 log.go:172] (0xc0009ac370) Data frame received for 3\nI0603 14:16:11.488412 2994 log.go:172] (0xc0009d0000) (3) Data frame handling\nI0603 14:16:11.490482 2994 log.go:172] (0xc0009ac370) Data frame received for 1\nI0603 14:16:11.490504 2994 log.go:172] (0xc0002e66e0) (1) Data frame handling\nI0603 14:16:11.490520 2994 log.go:172] (0xc0002e66e0) (1) Data frame sent\nI0603 14:16:11.490537 2994 log.go:172] (0xc0009ac370) (0xc0002e66e0) Stream removed, broadcasting: 1\nI0603 14:16:11.490562 2994 log.go:172] (0xc0009ac370) Go away received\nI0603 14:16:11.490816 2994 log.go:172] 
(0xc0009ac370) (0xc0002e66e0) Stream removed, broadcasting: 1\nI0603 14:16:11.490830 2994 log.go:172] (0xc0009ac370) (0xc0009d0000) Stream removed, broadcasting: 3\nI0603 14:16:11.490836 2994 log.go:172] (0xc0009ac370) (0xc0009d00a0) Stream removed, broadcasting: 5\n" Jun 3 14:16:11.496: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 3 14:16:11.496: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 3 14:16:21.548: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 3 14:16:31.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 3 14:16:31.850: INFO: stderr: "I0603 14:16:31.742082 3016 log.go:172] (0xc0008bc420) (0xc000422820) Create stream\nI0603 14:16:31.742138 3016 log.go:172] (0xc0008bc420) (0xc000422820) Stream added, broadcasting: 1\nI0603 14:16:31.744848 3016 log.go:172] (0xc0008bc420) Reply frame received for 1\nI0603 14:16:31.744899 3016 log.go:172] (0xc0008bc420) (0xc000992000) Create stream\nI0603 14:16:31.744911 3016 log.go:172] (0xc0008bc420) (0xc000992000) Stream added, broadcasting: 3\nI0603 14:16:31.746125 3016 log.go:172] (0xc0008bc420) Reply frame received for 3\nI0603 14:16:31.746727 3016 log.go:172] (0xc0008bc420) (0xc000842000) Create stream\nI0603 14:16:31.746767 3016 log.go:172] (0xc0008bc420) (0xc000842000) Stream added, broadcasting: 5\nI0603 14:16:31.748512 3016 log.go:172] (0xc0008bc420) Reply frame received for 5\nI0603 14:16:31.841848 3016 log.go:172] (0xc0008bc420) Data frame received for 3\nI0603 14:16:31.841887 3016 log.go:172] (0xc000992000) (3) Data frame handling\nI0603 14:16:31.841902 3016 log.go:172] (0xc000992000) (3) Data frame sent\nI0603 14:16:31.841913 3016 log.go:172] (0xc0008bc420) Data frame received for 3\nI0603 14:16:31.841921 3016 
log.go:172] (0xc000992000) (3) Data frame handling\nI0603 14:16:31.841976 3016 log.go:172] (0xc0008bc420) Data frame received for 5\nI0603 14:16:31.841997 3016 log.go:172] (0xc000842000) (5) Data frame handling\nI0603 14:16:31.842014 3016 log.go:172] (0xc000842000) (5) Data frame sent\nI0603 14:16:31.842024 3016 log.go:172] (0xc0008bc420) Data frame received for 5\nI0603 14:16:31.842033 3016 log.go:172] (0xc000842000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0603 14:16:31.843945 3016 log.go:172] (0xc0008bc420) Data frame received for 1\nI0603 14:16:31.843977 3016 log.go:172] (0xc000422820) (1) Data frame handling\nI0603 14:16:31.843998 3016 log.go:172] (0xc000422820) (1) Data frame sent\nI0603 14:16:31.844013 3016 log.go:172] (0xc0008bc420) (0xc000422820) Stream removed, broadcasting: 1\nI0603 14:16:31.844029 3016 log.go:172] (0xc0008bc420) Go away received\nI0603 14:16:31.844483 3016 log.go:172] (0xc0008bc420) (0xc000422820) Stream removed, broadcasting: 1\nI0603 14:16:31.844517 3016 log.go:172] (0xc0008bc420) (0xc000992000) Stream removed, broadcasting: 3\nI0603 14:16:31.844531 3016 log.go:172] (0xc0008bc420) (0xc000842000) Stream removed, broadcasting: 5\n" Jun 3 14:16:31.851: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 3 14:16:31.851: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 3 14:16:41.896: INFO: Waiting for StatefulSet statefulset-9444/ss2 to complete update Jun 3 14:16:41.896: INFO: Waiting for Pod statefulset-9444/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 3 14:16:41.896: INFO: Waiting for Pod statefulset-9444/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 3 14:16:51.905: INFO: Waiting for StatefulSet statefulset-9444/ss2 to complete update Jun 3 14:16:51.905: INFO: Waiting for Pod statefulset-9444/ss2-0 to have revision ss2-7c9b54fd4c 
update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 3 14:17:01.905: INFO: Deleting all statefulset in ns statefulset-9444 Jun 3 14:17:01.908: INFO: Scaling statefulset ss2 to 0 Jun 3 14:17:21.930: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 14:17:21.933: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:17:21.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9444" for this suite. Jun 3 14:17:27.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:17:28.077: INFO: namespace statefulset-9444 deletion completed in 6.119393146s • [SLOW TEST:160.067 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:17:28.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ad0f6e87-00ca-43a5-9e69-7b80d5a7064b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ad0f6e87-00ca-43a5-9e69-7b80d5a7064b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:17:36.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3754" for this suite. Jun 3 14:17:58.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:17:58.286: INFO: namespace projected-3754 deletion completed in 22.096025046s • [SLOW TEST:30.209 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:17:58.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 3 14:18:02.921: INFO: Successfully updated pod "pod-update-ee53cc30-9d93-417c-a587-be81f5dc9de7"
STEP: verifying the updated pod is in kubernetes
Jun 3 14:18:02.930: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:18:02.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-742" for this suite.
Jun 3 14:18:24.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:18:25.066: INFO: namespace pods-742 deletion completed in 22.132816661s
• [SLOW TEST:26.779 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:18:25.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-4b0418dc-044e-470f-9319-f313a9884160
STEP: Creating configMap with name cm-test-opt-upd-74486504-433c-4f1c-ac61-65cd8c77b775
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4b0418dc-044e-470f-9319-f313a9884160
STEP: Updating configmap cm-test-opt-upd-74486504-433c-4f1c-ac61-65cd8c77b775
STEP: Creating configMap with name cm-test-opt-create-83ce5faf-b12a-41ba-8ec6-f462c1292715
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:18:33.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2588" for this suite.
Jun 3 14:18:55.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:18:55.433: INFO: namespace projected-2588 deletion completed in 22.099366666s
• [SLOW TEST:30.367 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:18:55.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jun 3 14:18:55.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3025'
Jun 3 14:18:55.836: INFO: stderr: ""
Jun 3 14:18:55.836: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 3 14:18:55.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3025'
Jun 3 14:18:55.965: INFO: stderr: ""
Jun 3 14:18:55.965: INFO: stdout: "update-demo-nautilus-44dj5 update-demo-nautilus-sgpx5 "
Jun 3 14:18:55.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44dj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3025'
Jun 3 14:18:56.047: INFO: stderr: ""
Jun 3 14:18:56.047: INFO: stdout: ""
Jun 3 14:18:56.047: INFO: update-demo-nautilus-44dj5 is created but not running
Jun 3 14:19:01.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3025'
Jun 3 14:19:01.145: INFO: stderr: ""
Jun 3 14:19:01.145: INFO: stdout: "update-demo-nautilus-44dj5 update-demo-nautilus-sgpx5 "
Jun 3 14:19:01.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44dj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3025'
Jun 3 14:19:01.239: INFO: stderr: ""
Jun 3 14:19:01.239: INFO: stdout: "true"
Jun 3 14:19:01.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44dj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3025'
Jun 3 14:19:01.328: INFO: stderr: ""
Jun 3 14:19:01.328: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 3 14:19:01.328: INFO: validating pod update-demo-nautilus-44dj5
Jun 3 14:19:01.332: INFO: got data: { "image": "nautilus.jpg" }
Jun 3 14:19:01.332: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 3 14:19:01.332: INFO: update-demo-nautilus-44dj5 is verified up and running
Jun 3 14:19:01.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgpx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3025'
Jun 3 14:19:01.419: INFO: stderr: ""
Jun 3 14:19:01.419: INFO: stdout: "true"
Jun 3 14:19:01.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgpx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3025'
Jun 3 14:19:01.500: INFO: stderr: ""
Jun 3 14:19:01.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 3 14:19:01.500: INFO: validating pod update-demo-nautilus-sgpx5
Jun 3 14:19:01.527: INFO: got data: { "image": "nautilus.jpg" }
Jun 3 14:19:01.527: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 3 14:19:01.527: INFO: update-demo-nautilus-sgpx5 is verified up and running
STEP: using delete to clean up resources
Jun 3 14:19:01.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3025'
Jun 3 14:19:01.619: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 3 14:19:01.619: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 3 14:19:01.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3025'
Jun 3 14:19:01.718: INFO: stderr: "No resources found.\n"
Jun 3 14:19:01.718: INFO: stdout: ""
Jun 3 14:19:01.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3025 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 3 14:19:01.810: INFO: stderr: ""
Jun 3 14:19:01.811: INFO: stdout: "update-demo-nautilus-44dj5\nupdate-demo-nautilus-sgpx5\n"
Jun 3 14:19:02.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3025'
Jun 3 14:19:02.420: INFO: stderr: "No resources found.\n"
Jun 3 14:19:02.421: INFO: stdout: ""
Jun 3 14:19:02.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3025 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 3 14:19:02.517: INFO: stderr: ""
Jun 3 14:19:02.518: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:19:02.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3025" for this suite.
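Editor's note: the cleanup loop above polls `kubectl get pods -o go-template=...` with a template that prints only pods whose `deletionTimestamp` is unset. That exact template can be exercised locally with Go's `text/template`; the pod names below come from the log, but the data structure is a simplified stand-in for the API response, not the e2e framework's own code.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render applies the same go-template the test passes to kubectl:
// print the name of every pod that has no deletionTimestamp.
func render(podList map[string]interface{}) string {
	tmpl := template.Must(template.New("pods").Parse(
		`{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, podList); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// Simplified stand-in for the `kubectl get pods` response:
	// one pod still live, one already marked for deletion.
	podList := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{
				"name": "update-demo-nautilus-44dj5"}},
			map[string]interface{}{"metadata": map[string]interface{}{
				"name":              "update-demo-nautilus-sgpx5",
				"deletionTimestamp": "2020-06-03T14:19:01Z"}},
		},
	}
	// Only the pod without a deletionTimestamp is printed.
	fmt.Print(render(podList))
}
```

The test polls this until the output is empty, which is why the log shows both pod names first and an empty stdout half a second later.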
Jun 3 14:19:24.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:19:24.800: INFO: namespace kubectl-3025 deletion completed in 22.278991251s
• [SLOW TEST:29.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:19:24.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jun 3 14:19:28.915: INFO: Pod pod-hostip-4c590c2d-6d12-4526-a7f4-2c9c61f7a9e3 has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:19:28.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7220" for this suite.
Jun 3 14:19:50.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:19:51.015: INFO: namespace pods-7220 deletion completed in 22.096689691s
• [SLOW TEST:26.215 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:19:51.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f7227ce8-ca39-423d-aa77-b99dd4416542
STEP: Creating a pod to test consume secrets
Jun 3 14:19:51.113: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1" in namespace "projected-6977" to be "success or failure"
Jun 3 14:19:51.149: INFO: Pod "pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.993492ms
Jun 3 14:19:53.154: INFO: Pod "pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040185468s
Jun 3 14:19:55.158: INFO: Pod "pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044470868s
STEP: Saw pod success
Jun 3 14:19:55.158: INFO: Pod "pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1" satisfied condition "success or failure"
Jun 3 14:19:55.162: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1 container projected-secret-volume-test:
STEP: delete the pod
Jun 3 14:19:55.200: INFO: Waiting for pod pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1 to disappear
Jun 3 14:19:55.209: INFO: Pod pod-projected-secrets-60d2cb33-4705-4279-9119-cc9095d4c2a1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:19:55.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6977" for this suite.
Jun 3 14:20:01.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:20:01.323: INFO: namespace projected-6977 deletion completed in 6.109562681s
• [SLOW TEST:10.307 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:20:01.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jun 3 14:20:01.372: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 14:20:01.386: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 14:20:01.388: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jun 3 14:20:01.394: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.394: INFO: Container kube-proxy ready: true, restart count 0
Jun 3 14:20:01.394: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.394: INFO: Container kindnet-cni ready: true, restart count 2
Jun 3 14:20:01.394: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jun 3 14:20:01.400: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.400: INFO: Container kube-proxy ready: true, restart count 0
Jun 3 14:20:01.401: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.401: INFO: Container kindnet-cni ready: true, restart count 2
Jun 3 14:20:01.401: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.401: INFO: Container coredns ready: true, restart count 0
Jun 3 14:20:01.401: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Jun 3 14:20:01.401: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-947b1c7e-e976-4d36-98d0-d81824e24003 42
STEP: Trying to relaunch the pod, now with labels.
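Editor's note: the relaunch step above pins the pod to the freshly labeled node with a `nodeSelector`. A minimal manifest sketch of what that pod spec looks like — the label key and the value `42` are taken from the log, while the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels   # illustrative name, not from the log
spec:
  nodeSelector:
    kubernetes.io/e2e-947b1c7e-e976-4d36-98d0-d81824e24003: "42"
  containers:
  - name: with-labels
    image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0   # any runnable image works
```

With a matching label on exactly one node, the scheduler can only place the pod there, which is what the test then verifies.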
STEP: removing the label kubernetes.io/e2e-947b1c7e-e976-4d36-98d0-d81824e24003 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-947b1c7e-e976-4d36-98d0-d81824e24003
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:20:09.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6549" for this suite.
Jun 3 14:20:17.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:20:17.652: INFO: namespace sched-pred-6549 deletion completed in 8.094016925s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:16.330 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:20:17.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 3 14:20:21.780: INFO: Waiting up to 5m0s for pod "client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0" in namespace "pods-6849" to be "success or failure"
Jun 3 14:20:21.796: INFO: Pod "client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.822251ms
Jun 3 14:20:23.801: INFO: Pod "client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020600506s
Jun 3 14:20:25.805: INFO: Pod "client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024800607s
STEP: Saw pod success
Jun 3 14:20:25.805: INFO: Pod "client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0" satisfied condition "success or failure"
Jun 3 14:20:25.808: INFO: Trying to get logs from node iruya-worker pod client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0 container env3cont:
STEP: delete the pod
Jun 3 14:20:25.829: INFO: Waiting for pod client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0 to disappear
Jun 3 14:20:25.874: INFO: Pod client-envvars-142d4fb1-3586-4c7a-8251-52ab9a0a3eb0 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:20:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6849" for this suite.
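Editor's note: the env-var test above depends on the kubelet injecting `<SERVICE>_SERVICE_HOST` and `<SERVICE>_SERVICE_PORT` variables for every service visible to the pod. The variable prefix is the service name upper-cased with dashes turned into underscores; the service name in this run is not shown in the log, so the name below is illustrative. A stand-alone sketch of that derivation:

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvPrefix derives the env-var prefix Kubernetes uses for a
// service: the name is upper-cased and dashes become underscores.
func serviceEnvPrefix(serviceName string) string {
	return strings.ReplaceAll(strings.ToUpper(serviceName), "-", "_")
}

func main() {
	// "fooservice-1" is an illustrative service name, not from the log.
	fmt.Println(serviceEnvPrefix("fooservice-1") + "_SERVICE_HOST")
	fmt.Println(serviceEnvPrefix("fooservice-1") + "_SERVICE_PORT")
}
```

Note that only services created before the pod starts get these variables, which is why the test creates its service first and the pod a few seconds later.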
Jun 3 14:21:15.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:21:15.970: INFO: namespace pods-6849 deletion completed in 50.091450937s • [SLOW TEST:58.317 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:21:15.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a4dcbbba-c81a-4792-9c4a-7dcfa9f78310 STEP: Creating a pod to test consume configMaps Jun 3 14:21:16.058: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11" in namespace "projected-921" to be "success or failure" Jun 3 14:21:16.062: INFO: Pod "pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.979327ms Jun 3 14:21:18.065: INFO: Pod "pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007389407s Jun 3 14:21:20.070: INFO: Pod "pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012208133s STEP: Saw pod success Jun 3 14:21:20.070: INFO: Pod "pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11" satisfied condition "success or failure" Jun 3 14:21:20.074: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11 container projected-configmap-volume-test: STEP: delete the pod Jun 3 14:21:20.094: INFO: Waiting for pod pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11 to disappear Jun 3 14:21:20.098: INFO: Pod pod-projected-configmaps-0b5eb036-1120-4c7e-8a95-68748a23ca11 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:21:20.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-921" for this suite. 
Jun 3 14:21:26.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:21:26.210: INFO: namespace projected-921 deletion completed in 6.108628433s • [SLOW TEST:10.240 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:21:26.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 3 14:21:26.262: INFO: Waiting up to 5m0s for pod "client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac" in namespace "containers-1047" to be "success or failure" Jun 3 14:21:26.277: INFO: Pod "client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac": Phase="Pending", Reason="", readiness=false. Elapsed: 15.537403ms Jun 3 14:21:28.281: INFO: Pod "client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019673466s Jun 3 14:21:30.385: INFO: Pod "client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123579542s STEP: Saw pod success Jun 3 14:21:30.385: INFO: Pod "client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac" satisfied condition "success or failure" Jun 3 14:21:30.388: INFO: Trying to get logs from node iruya-worker pod client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac container test-container: STEP: delete the pod Jun 3 14:21:30.519: INFO: Waiting for pod client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac to disappear Jun 3 14:21:30.523: INFO: Pod client-containers-b100bf94-5ef0-4c3c-a6af-26143eae9eac no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:21:30.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1047" for this suite. Jun 3 14:21:36.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:21:36.635: INFO: namespace containers-1047 deletion completed in 6.10993303s • [SLOW TEST:10.425 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:21:36.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:21:36.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c" in namespace "projected-2866" to be "success or failure" Jun 3 14:21:36.702: INFO: Pod "downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.557425ms Jun 3 14:21:38.707: INFO: Pod "downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008150228s Jun 3 14:21:40.711: INFO: Pod "downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012294414s STEP: Saw pod success Jun 3 14:21:40.711: INFO: Pod "downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c" satisfied condition "success or failure" Jun 3 14:21:40.714: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c container client-container: STEP: delete the pod Jun 3 14:21:40.746: INFO: Waiting for pod downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c to disappear Jun 3 14:21:40.751: INFO: Pod downwardapi-volume-447a533d-335a-4f76-825f-5d4a0664841c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:21:40.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2866" for this suite. Jun 3 14:21:46.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:21:46.848: INFO: namespace projected-2866 deletion completed in 6.093414043s • [SLOW TEST:10.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 
14:21:46.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 14:21:50.971: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:21:51.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1955" for this suite. 
Jun 3 14:21:57.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:21:57.107: INFO: namespace container-runtime-1955 deletion completed in 6.088128097s • [SLOW TEST:10.258 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:21:57.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-42729481-6e12-492e-9153-5a30335c903a [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:21:57.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"secrets-8909" for this suite. Jun 3 14:22:03.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:22:03.271: INFO: namespace secrets-8909 deletion completed in 6.124215233s • [SLOW TEST:6.164 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:22:03.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:22:03.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899" in namespace "projected-2942" to be "success or failure" Jun 3 14:22:03.366: INFO: Pod "downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.262111ms Jun 3 14:22:05.370: INFO: Pod "downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04805958s Jun 3 14:22:07.373: INFO: Pod "downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051523472s STEP: Saw pod success Jun 3 14:22:07.373: INFO: Pod "downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899" satisfied condition "success or failure" Jun 3 14:22:07.375: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899 container client-container: STEP: delete the pod Jun 3 14:22:07.411: INFO: Waiting for pod downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899 to disappear Jun 3 14:22:07.415: INFO: Pod downwardapi-volume-608d896b-10d1-40a2-a890-32f31a276899 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:22:07.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2942" for this suite. 
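The projected downwardAPI test above creates a pod whose memory limit is exposed to the container as a file, then reads it back from the container's logs. A minimal sketch of such a manifest (Pod API field names are real; the image, mount path, and limit value are illustrative assumptions, not taken from the log):

```python
# Hypothetical sketch of a pod exposing its own memory limit through a
# projected downwardAPI volume, the mechanism the test exercises.
def downward_api_pod(name="downwardapi-volume-demo"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox",
                # Print the projected file so the value shows up in the logs.
                "command": ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"],
                "resources": {"limits": {"memory": "64Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {"sources": [{
                    "downwardAPI": {"items": [{
                        "path": "memory_limit",
                        # resourceFieldRef maps a container resource field
                        # (here limits.memory) onto a file in the volume.
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "limits.memory",
                        },
                    }]},
                }]},
            }],
        },
    }
```

The "success or failure" condition in the log corresponds to this pod reaching `Succeeded` after printing the projected value.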
Jun 3 14:22:13.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:22:13.505: INFO: namespace projected-2942 deletion completed in 6.086757267s • [SLOW TEST:10.234 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:22:13.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 3 14:22:25.826: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:25.826: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:25.872162 6 log.go:172] (0xc003332a50) (0xc002bf3ae0) Create stream I0603 
14:22:25.872190 6 log.go:172] (0xc003332a50) (0xc002bf3ae0) Stream added, broadcasting: 1 I0603 14:22:25.874201 6 log.go:172] (0xc003332a50) Reply frame received for 1 I0603 14:22:25.874240 6 log.go:172] (0xc003332a50) (0xc002276fa0) Create stream I0603 14:22:25.874256 6 log.go:172] (0xc003332a50) (0xc002276fa0) Stream added, broadcasting: 3 I0603 14:22:25.875139 6 log.go:172] (0xc003332a50) Reply frame received for 3 I0603 14:22:25.875188 6 log.go:172] (0xc003332a50) (0xc002b78000) Create stream I0603 14:22:25.875208 6 log.go:172] (0xc003332a50) (0xc002b78000) Stream added, broadcasting: 5 I0603 14:22:25.876036 6 log.go:172] (0xc003332a50) Reply frame received for 5 I0603 14:22:25.962705 6 log.go:172] (0xc003332a50) Data frame received for 5 I0603 14:22:25.962752 6 log.go:172] (0xc002b78000) (5) Data frame handling I0603 14:22:25.962802 6 log.go:172] (0xc003332a50) Data frame received for 3 I0603 14:22:25.962830 6 log.go:172] (0xc002276fa0) (3) Data frame handling I0603 14:22:25.962854 6 log.go:172] (0xc002276fa0) (3) Data frame sent I0603 14:22:25.962873 6 log.go:172] (0xc003332a50) Data frame received for 3 I0603 14:22:25.962890 6 log.go:172] (0xc002276fa0) (3) Data frame handling I0603 14:22:25.964565 6 log.go:172] (0xc003332a50) Data frame received for 1 I0603 14:22:25.964601 6 log.go:172] (0xc002bf3ae0) (1) Data frame handling I0603 14:22:25.964632 6 log.go:172] (0xc002bf3ae0) (1) Data frame sent I0603 14:22:25.964740 6 log.go:172] (0xc003332a50) (0xc002bf3ae0) Stream removed, broadcasting: 1 I0603 14:22:25.964807 6 log.go:172] (0xc003332a50) Go away received I0603 14:22:25.964851 6 log.go:172] (0xc003332a50) (0xc002bf3ae0) Stream removed, broadcasting: 1 I0603 14:22:25.964884 6 log.go:172] (0xc003332a50) (0xc002276fa0) Stream removed, broadcasting: 3 I0603 14:22:25.964903 6 log.go:172] (0xc003332a50) (0xc002b78000) Stream removed, broadcasting: 5 Jun 3 14:22:25.964: INFO: Exec stderr: "" Jun 3 14:22:25.964: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:25.965: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:25.996678 6 log.go:172] (0xc000aed6b0) (0xc002277360) Create stream I0603 14:22:25.996713 6 log.go:172] (0xc000aed6b0) (0xc002277360) Stream added, broadcasting: 1 I0603 14:22:25.999491 6 log.go:172] (0xc000aed6b0) Reply frame received for 1 I0603 14:22:25.999518 6 log.go:172] (0xc000aed6b0) (0xc002277400) Create stream I0603 14:22:25.999533 6 log.go:172] (0xc000aed6b0) (0xc002277400) Stream added, broadcasting: 3 I0603 14:22:26.000545 6 log.go:172] (0xc000aed6b0) Reply frame received for 3 I0603 14:22:26.000580 6 log.go:172] (0xc000aed6b0) (0xc0017bf5e0) Create stream I0603 14:22:26.000593 6 log.go:172] (0xc000aed6b0) (0xc0017bf5e0) Stream added, broadcasting: 5 I0603 14:22:26.002091 6 log.go:172] (0xc000aed6b0) Reply frame received for 5 I0603 14:22:26.082059 6 log.go:172] (0xc000aed6b0) Data frame received for 3 I0603 14:22:26.082090 6 log.go:172] (0xc002277400) (3) Data frame handling I0603 14:22:26.082103 6 log.go:172] (0xc002277400) (3) Data frame sent I0603 14:22:26.082109 6 log.go:172] (0xc000aed6b0) Data frame received for 3 I0603 14:22:26.082112 6 log.go:172] (0xc002277400) (3) Data frame handling I0603 14:22:26.082130 6 log.go:172] (0xc000aed6b0) Data frame received for 5 I0603 14:22:26.082138 6 log.go:172] (0xc0017bf5e0) (5) Data frame handling I0603 14:22:26.084290 6 log.go:172] (0xc000aed6b0) Data frame received for 1 I0603 14:22:26.084306 6 log.go:172] (0xc002277360) (1) Data frame handling I0603 14:22:26.084317 6 log.go:172] (0xc002277360) (1) Data frame sent I0603 14:22:26.084333 6 log.go:172] (0xc000aed6b0) (0xc002277360) Stream removed, broadcasting: 1 I0603 14:22:26.084428 6 log.go:172] (0xc000aed6b0) Go away received I0603 14:22:26.084484 6 log.go:172] (0xc000aed6b0) (0xc002277360) Stream removed, 
broadcasting: 1 I0603 14:22:26.084503 6 log.go:172] (0xc000aed6b0) (0xc002277400) Stream removed, broadcasting: 3 I0603 14:22:26.084514 6 log.go:172] (0xc000aed6b0) (0xc0017bf5e0) Stream removed, broadcasting: 5 Jun 3 14:22:26.084: INFO: Exec stderr: "" Jun 3 14:22:26.084: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.084: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.118309 6 log.go:172] (0xc002551290) (0xc002b781e0) Create stream I0603 14:22:26.118336 6 log.go:172] (0xc002551290) (0xc002b781e0) Stream added, broadcasting: 1 I0603 14:22:26.120398 6 log.go:172] (0xc002551290) Reply frame received for 1 I0603 14:22:26.120432 6 log.go:172] (0xc002551290) (0xc002b78280) Create stream I0603 14:22:26.120445 6 log.go:172] (0xc002551290) (0xc002b78280) Stream added, broadcasting: 3 I0603 14:22:26.121425 6 log.go:172] (0xc002551290) Reply frame received for 3 I0603 14:22:26.121452 6 log.go:172] (0xc002551290) (0xc002b78320) Create stream I0603 14:22:26.121460 6 log.go:172] (0xc002551290) (0xc002b78320) Stream added, broadcasting: 5 I0603 14:22:26.122274 6 log.go:172] (0xc002551290) Reply frame received for 5 I0603 14:22:26.175573 6 log.go:172] (0xc002551290) Data frame received for 5 I0603 14:22:26.175615 6 log.go:172] (0xc002b78320) (5) Data frame handling I0603 14:22:26.175645 6 log.go:172] (0xc002551290) Data frame received for 3 I0603 14:22:26.175668 6 log.go:172] (0xc002b78280) (3) Data frame handling I0603 14:22:26.175683 6 log.go:172] (0xc002b78280) (3) Data frame sent I0603 14:22:26.175690 6 log.go:172] (0xc002551290) Data frame received for 3 I0603 14:22:26.175703 6 log.go:172] (0xc002b78280) (3) Data frame handling I0603 14:22:26.177986 6 log.go:172] (0xc002551290) Data frame received for 1 I0603 14:22:26.178011 6 log.go:172] (0xc002b781e0) (1) Data frame handling I0603 14:22:26.178054 6 
log.go:172] (0xc002b781e0) (1) Data frame sent I0603 14:22:26.178167 6 log.go:172] (0xc002551290) (0xc002b781e0) Stream removed, broadcasting: 1 I0603 14:22:26.178259 6 log.go:172] (0xc002551290) Go away received I0603 14:22:26.178297 6 log.go:172] (0xc002551290) (0xc002b781e0) Stream removed, broadcasting: 1 I0603 14:22:26.178360 6 log.go:172] (0xc002551290) (0xc002b78280) Stream removed, broadcasting: 3 I0603 14:22:26.178392 6 log.go:172] (0xc002551290) (0xc002b78320) Stream removed, broadcasting: 5 Jun 3 14:22:26.178: INFO: Exec stderr: "" Jun 3 14:22:26.178: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.178: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.236060 6 log.go:172] (0xc001fecfd0) (0xc001277b80) Create stream I0603 14:22:26.236092 6 log.go:172] (0xc001fecfd0) (0xc001277b80) Stream added, broadcasting: 1 I0603 14:22:26.238592 6 log.go:172] (0xc001fecfd0) Reply frame received for 1 I0603 14:22:26.238629 6 log.go:172] (0xc001fecfd0) (0xc0017bf900) Create stream I0603 14:22:26.238641 6 log.go:172] (0xc001fecfd0) (0xc0017bf900) Stream added, broadcasting: 3 I0603 14:22:26.239660 6 log.go:172] (0xc001fecfd0) Reply frame received for 3 I0603 14:22:26.239703 6 log.go:172] (0xc001fecfd0) (0xc0017bfc20) Create stream I0603 14:22:26.239713 6 log.go:172] (0xc001fecfd0) (0xc0017bfc20) Stream added, broadcasting: 5 I0603 14:22:26.240655 6 log.go:172] (0xc001fecfd0) Reply frame received for 5 I0603 14:22:26.294637 6 log.go:172] (0xc001fecfd0) Data frame received for 5 I0603 14:22:26.294688 6 log.go:172] (0xc0017bfc20) (5) Data frame handling I0603 14:22:26.294735 6 log.go:172] (0xc001fecfd0) Data frame received for 3 I0603 14:22:26.294766 6 log.go:172] (0xc0017bf900) (3) Data frame handling I0603 14:22:26.294794 6 log.go:172] (0xc0017bf900) (3) Data frame sent I0603 14:22:26.294816 6 
log.go:172] (0xc001fecfd0) Data frame received for 3 I0603 14:22:26.294828 6 log.go:172] (0xc0017bf900) (3) Data frame handling I0603 14:22:26.296275 6 log.go:172] (0xc001fecfd0) Data frame received for 1 I0603 14:22:26.296316 6 log.go:172] (0xc001277b80) (1) Data frame handling I0603 14:22:26.296362 6 log.go:172] (0xc001277b80) (1) Data frame sent I0603 14:22:26.296397 6 log.go:172] (0xc001fecfd0) (0xc001277b80) Stream removed, broadcasting: 1 I0603 14:22:26.296425 6 log.go:172] (0xc001fecfd0) Go away received I0603 14:22:26.296549 6 log.go:172] (0xc001fecfd0) (0xc001277b80) Stream removed, broadcasting: 1 I0603 14:22:26.296596 6 log.go:172] (0xc001fecfd0) (0xc0017bf900) Stream removed, broadcasting: 3 I0603 14:22:26.296608 6 log.go:172] (0xc001fecfd0) (0xc0017bfc20) Stream removed, broadcasting: 5 Jun 3 14:22:26.296: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 3 14:22:26.296: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.296: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.333776 6 log.go:172] (0xc003548630) (0xc002277900) Create stream I0603 14:22:26.333807 6 log.go:172] (0xc003548630) (0xc002277900) Stream added, broadcasting: 1 I0603 14:22:26.336108 6 log.go:172] (0xc003548630) Reply frame received for 1 I0603 14:22:26.336134 6 log.go:172] (0xc003548630) (0xc0017bfd60) Create stream I0603 14:22:26.336147 6 log.go:172] (0xc003548630) (0xc0017bfd60) Stream added, broadcasting: 3 I0603 14:22:26.337240 6 log.go:172] (0xc003548630) Reply frame received for 3 I0603 14:22:26.337316 6 log.go:172] (0xc003548630) (0xc0022779a0) Create stream I0603 14:22:26.337328 6 log.go:172] (0xc003548630) (0xc0022779a0) Stream added, broadcasting: 5 I0603 14:22:26.338393 6 log.go:172] (0xc003548630) Reply frame received for 5 
I0603 14:22:26.426459 6 log.go:172] (0xc003548630) Data frame received for 5 I0603 14:22:26.426491 6 log.go:172] (0xc0022779a0) (5) Data frame handling I0603 14:22:26.426509 6 log.go:172] (0xc003548630) Data frame received for 3 I0603 14:22:26.426515 6 log.go:172] (0xc0017bfd60) (3) Data frame handling I0603 14:22:26.426525 6 log.go:172] (0xc0017bfd60) (3) Data frame sent I0603 14:22:26.426532 6 log.go:172] (0xc003548630) Data frame received for 3 I0603 14:22:26.426539 6 log.go:172] (0xc0017bfd60) (3) Data frame handling I0603 14:22:26.427783 6 log.go:172] (0xc003548630) Data frame received for 1 I0603 14:22:26.427801 6 log.go:172] (0xc002277900) (1) Data frame handling I0603 14:22:26.427814 6 log.go:172] (0xc002277900) (1) Data frame sent I0603 14:22:26.427827 6 log.go:172] (0xc003548630) (0xc002277900) Stream removed, broadcasting: 1 I0603 14:22:26.427895 6 log.go:172] (0xc003548630) (0xc002277900) Stream removed, broadcasting: 1 I0603 14:22:26.427903 6 log.go:172] (0xc003548630) (0xc0017bfd60) Stream removed, broadcasting: 3 I0603 14:22:26.427977 6 log.go:172] (0xc003548630) Go away received I0603 14:22:26.428067 6 log.go:172] (0xc003548630) (0xc0022779a0) Stream removed, broadcasting: 5 Jun 3 14:22:26.428: INFO: Exec stderr: "" Jun 3 14:22:26.428: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.428: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.460454 6 log.go:172] (0xc00237dad0) (0xc002062140) Create stream I0603 14:22:26.460483 6 log.go:172] (0xc00237dad0) (0xc002062140) Stream added, broadcasting: 1 I0603 14:22:26.463813 6 log.go:172] (0xc00237dad0) Reply frame received for 1 I0603 14:22:26.463868 6 log.go:172] (0xc00237dad0) (0xc002062280) Create stream I0603 14:22:26.463883 6 log.go:172] (0xc00237dad0) (0xc002062280) Stream added, broadcasting: 3 I0603 14:22:26.464984 6 
log.go:172] (0xc00237dad0) Reply frame received for 3 I0603 14:22:26.465020 6 log.go:172] (0xc00237dad0) (0xc002277a40) Create stream I0603 14:22:26.465031 6 log.go:172] (0xc00237dad0) (0xc002277a40) Stream added, broadcasting: 5 I0603 14:22:26.466295 6 log.go:172] (0xc00237dad0) Reply frame received for 5 I0603 14:22:26.540812 6 log.go:172] (0xc00237dad0) Data frame received for 3 I0603 14:22:26.540843 6 log.go:172] (0xc002062280) (3) Data frame handling I0603 14:22:26.540850 6 log.go:172] (0xc002062280) (3) Data frame sent I0603 14:22:26.540855 6 log.go:172] (0xc00237dad0) Data frame received for 3 I0603 14:22:26.540868 6 log.go:172] (0xc002062280) (3) Data frame handling I0603 14:22:26.540895 6 log.go:172] (0xc00237dad0) Data frame received for 5 I0603 14:22:26.540902 6 log.go:172] (0xc002277a40) (5) Data frame handling I0603 14:22:26.542403 6 log.go:172] (0xc00237dad0) Data frame received for 1 I0603 14:22:26.542426 6 log.go:172] (0xc002062140) (1) Data frame handling I0603 14:22:26.542438 6 log.go:172] (0xc002062140) (1) Data frame sent I0603 14:22:26.542449 6 log.go:172] (0xc00237dad0) (0xc002062140) Stream removed, broadcasting: 1 I0603 14:22:26.542502 6 log.go:172] (0xc00237dad0) Go away received I0603 14:22:26.542547 6 log.go:172] (0xc00237dad0) (0xc002062140) Stream removed, broadcasting: 1 I0603 14:22:26.542565 6 log.go:172] (0xc00237dad0) (0xc002062280) Stream removed, broadcasting: 3 I0603 14:22:26.542578 6 log.go:172] (0xc00237dad0) (0xc002277a40) Stream removed, broadcasting: 5 Jun 3 14:22:26.542: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 3 14:22:26.542: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.542: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.573660 6 log.go:172] 
(0xc003549810) (0xc002277c20) Create stream I0603 14:22:26.573694 6 log.go:172] (0xc003549810) (0xc002277c20) Stream added, broadcasting: 1 I0603 14:22:26.576254 6 log.go:172] (0xc003549810) Reply frame received for 1 I0603 14:22:26.576290 6 log.go:172] (0xc003549810) (0xc002b783c0) Create stream I0603 14:22:26.576304 6 log.go:172] (0xc003549810) (0xc002b783c0) Stream added, broadcasting: 3 I0603 14:22:26.577528 6 log.go:172] (0xc003549810) Reply frame received for 3 I0603 14:22:26.577560 6 log.go:172] (0xc003549810) (0xc002277cc0) Create stream I0603 14:22:26.577572 6 log.go:172] (0xc003549810) (0xc002277cc0) Stream added, broadcasting: 5 I0603 14:22:26.578620 6 log.go:172] (0xc003549810) Reply frame received for 5 I0603 14:22:26.641081 6 log.go:172] (0xc003549810) Data frame received for 3 I0603 14:22:26.641323 6 log.go:172] (0xc002b783c0) (3) Data frame handling I0603 14:22:26.641348 6 log.go:172] (0xc002b783c0) (3) Data frame sent I0603 14:22:26.641363 6 log.go:172] (0xc003549810) Data frame received for 3 I0603 14:22:26.641375 6 log.go:172] (0xc002b783c0) (3) Data frame handling I0603 14:22:26.641404 6 log.go:172] (0xc003549810) Data frame received for 5 I0603 14:22:26.641443 6 log.go:172] (0xc002277cc0) (5) Data frame handling I0603 14:22:26.642966 6 log.go:172] (0xc003549810) Data frame received for 1 I0603 14:22:26.643012 6 log.go:172] (0xc002277c20) (1) Data frame handling I0603 14:22:26.643054 6 log.go:172] (0xc002277c20) (1) Data frame sent I0603 14:22:26.643093 6 log.go:172] (0xc003549810) (0xc002277c20) Stream removed, broadcasting: 1 I0603 14:22:26.643161 6 log.go:172] (0xc003549810) Go away received I0603 14:22:26.643240 6 log.go:172] (0xc003549810) (0xc002277c20) Stream removed, broadcasting: 1 I0603 14:22:26.643275 6 log.go:172] (0xc003549810) (0xc002b783c0) Stream removed, broadcasting: 3 I0603 14:22:26.643292 6 log.go:172] (0xc003549810) (0xc002277cc0) Stream removed, broadcasting: 5 Jun 3 14:22:26.643: INFO: Exec stderr: "" Jun 3 14:22:26.643: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.643: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.675804 6 log.go:172] (0xc0034c86e0) (0xc002e7e000) Create stream I0603 14:22:26.675836 6 log.go:172] (0xc0034c86e0) (0xc002e7e000) Stream added, broadcasting: 1 I0603 14:22:26.684743 6 log.go:172] (0xc0034c86e0) Reply frame received for 1 I0603 14:22:26.684774 6 log.go:172] (0xc0034c86e0) (0xc001276320) Create stream I0603 14:22:26.684786 6 log.go:172] (0xc0034c86e0) (0xc001276320) Stream added, broadcasting: 3 I0603 14:22:26.686019 6 log.go:172] (0xc0034c86e0) Reply frame received for 3 I0603 14:22:26.686095 6 log.go:172] (0xc0034c86e0) (0xc0012763c0) Create stream I0603 14:22:26.686117 6 log.go:172] (0xc0034c86e0) (0xc0012763c0) Stream added, broadcasting: 5 I0603 14:22:26.686899 6 log.go:172] (0xc0034c86e0) Reply frame received for 5 I0603 14:22:26.758926 6 log.go:172] (0xc0034c86e0) Data frame received for 3 I0603 14:22:26.758964 6 log.go:172] (0xc001276320) (3) Data frame handling I0603 14:22:26.758976 6 log.go:172] (0xc001276320) (3) Data frame sent I0603 14:22:26.758984 6 log.go:172] (0xc0034c86e0) Data frame received for 3 I0603 14:22:26.758991 6 log.go:172] (0xc001276320) (3) Data frame handling I0603 14:22:26.759034 6 log.go:172] (0xc0034c86e0) Data frame received for 5 I0603 14:22:26.759101 6 log.go:172] (0xc0012763c0) (5) Data frame handling I0603 14:22:26.760763 6 log.go:172] (0xc0034c86e0) Data frame received for 1 I0603 14:22:26.760787 6 log.go:172] (0xc002e7e000) (1) Data frame handling I0603 14:22:26.760816 6 log.go:172] (0xc002e7e000) (1) Data frame sent I0603 14:22:26.760834 6 log.go:172] (0xc0034c86e0) (0xc002e7e000) Stream removed, broadcasting: 1 I0603 14:22:26.760919 6 log.go:172] (0xc0034c86e0) Go away received I0603 14:22:26.760945 6 log.go:172] 
(0xc0034c86e0) (0xc002e7e000) Stream removed, broadcasting: 1 I0603 14:22:26.760955 6 log.go:172] (0xc0034c86e0) (0xc001276320) Stream removed, broadcasting: 3 I0603 14:22:26.760965 6 log.go:172] (0xc0034c86e0) (0xc0012763c0) Stream removed, broadcasting: 5 Jun 3 14:22:26.760: INFO: Exec stderr: "" Jun 3 14:22:26.761: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.761: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.795724 6 log.go:172] (0xc000dcad10) (0xc001276780) Create stream I0603 14:22:26.795752 6 log.go:172] (0xc000dcad10) (0xc001276780) Stream added, broadcasting: 1 I0603 14:22:26.798071 6 log.go:172] (0xc000dcad10) Reply frame received for 1 I0603 14:22:26.798105 6 log.go:172] (0xc000dcad10) (0xc001276a00) Create stream I0603 14:22:26.798112 6 log.go:172] (0xc000dcad10) (0xc001276a00) Stream added, broadcasting: 3 I0603 14:22:26.799270 6 log.go:172] (0xc000dcad10) Reply frame received for 3 I0603 14:22:26.799299 6 log.go:172] (0xc000dcad10) (0xc002276280) Create stream I0603 14:22:26.799315 6 log.go:172] (0xc000dcad10) (0xc002276280) Stream added, broadcasting: 5 I0603 14:22:26.800419 6 log.go:172] (0xc000dcad10) Reply frame received for 5 I0603 14:22:26.864229 6 log.go:172] (0xc000dcad10) Data frame received for 3 I0603 14:22:26.864277 6 log.go:172] (0xc001276a00) (3) Data frame handling I0603 14:22:26.864302 6 log.go:172] (0xc001276a00) (3) Data frame sent I0603 14:22:26.864326 6 log.go:172] (0xc000dcad10) Data frame received for 3 I0603 14:22:26.864344 6 log.go:172] (0xc001276a00) (3) Data frame handling I0603 14:22:26.864393 6 log.go:172] (0xc000dcad10) Data frame received for 5 I0603 14:22:26.864415 6 log.go:172] (0xc002276280) (5) Data frame handling I0603 14:22:26.866384 6 log.go:172] (0xc000dcad10) Data frame received for 1 I0603 14:22:26.866415 6 log.go:172] 
(0xc001276780) (1) Data frame handling I0603 14:22:26.866432 6 log.go:172] (0xc001276780) (1) Data frame sent I0603 14:22:26.866456 6 log.go:172] (0xc000dcad10) (0xc001276780) Stream removed, broadcasting: 1 I0603 14:22:26.866477 6 log.go:172] (0xc000dcad10) Go away received I0603 14:22:26.866641 6 log.go:172] (0xc000dcad10) (0xc001276780) Stream removed, broadcasting: 1 I0603 14:22:26.866673 6 log.go:172] (0xc000dcad10) (0xc001276a00) Stream removed, broadcasting: 3 I0603 14:22:26.866687 6 log.go:172] (0xc000dcad10) (0xc002276280) Stream removed, broadcasting: 5 Jun 3 14:22:26.866: INFO: Exec stderr: "" Jun 3 14:22:26.866: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7556 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:22:26.866: INFO: >>> kubeConfig: /root/.kube/config I0603 14:22:26.905717 6 log.go:172] (0xc00079a840) (0xc0026e6320) Create stream I0603 14:22:26.905743 6 log.go:172] (0xc00079a840) (0xc0026e6320) Stream added, broadcasting: 1 I0603 14:22:26.908157 6 log.go:172] (0xc00079a840) Reply frame received for 1 I0603 14:22:26.908192 6 log.go:172] (0xc00079a840) (0xc001276b40) Create stream I0603 14:22:26.908203 6 log.go:172] (0xc00079a840) (0xc001276b40) Stream added, broadcasting: 3 I0603 14:22:26.909507 6 log.go:172] (0xc00079a840) Reply frame received for 3 I0603 14:22:26.909562 6 log.go:172] (0xc00079a840) (0xc002764000) Create stream I0603 14:22:26.909579 6 log.go:172] (0xc00079a840) (0xc002764000) Stream added, broadcasting: 5 I0603 14:22:26.910737 6 log.go:172] (0xc00079a840) Reply frame received for 5 I0603 14:22:26.968213 6 log.go:172] (0xc00079a840) Data frame received for 3 I0603 14:22:26.968233 6 log.go:172] (0xc001276b40) (3) Data frame handling I0603 14:22:26.968247 6 log.go:172] (0xc001276b40) (3) Data frame sent I0603 14:22:26.968251 6 log.go:172] (0xc00079a840) Data frame received for 3 I0603 14:22:26.968255 6 
log.go:172] (0xc001276b40) (3) Data frame handling I0603 14:22:26.969651 6 log.go:172] (0xc00079a840) Data frame received for 5 I0603 14:22:26.969694 6 log.go:172] (0xc002764000) (5) Data frame handling I0603 14:22:26.974051 6 log.go:172] (0xc00079a840) Data frame received for 1 I0603 14:22:26.974078 6 log.go:172] (0xc0026e6320) (1) Data frame handling I0603 14:22:26.974097 6 log.go:172] (0xc0026e6320) (1) Data frame sent I0603 14:22:26.976100 6 log.go:172] (0xc00079a840) (0xc0026e6320) Stream removed, broadcasting: 1 I0603 14:22:26.976247 6 log.go:172] (0xc00079a840) (0xc0026e6320) Stream removed, broadcasting: 1 I0603 14:22:26.976287 6 log.go:172] (0xc00079a840) (0xc001276b40) Stream removed, broadcasting: 3 I0603 14:22:26.976467 6 log.go:172] (0xc00079a840) (0xc002764000) Stream removed, broadcasting: 5 Jun 3 14:22:26.976: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:22:26.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7556" for this suite. 
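The KubeletManagedEtcHosts test above distinguishes three cases: a plain pod, where the kubelet manages `/etc/hosts`; a container that mounts its own file over `/etc/hosts` (busybox-3), which the kubelet leaves alone; and a `hostNetwork=true` pod, which uses the node's `/etc/hosts` as-is. A small sketch of that decision rule (a hypothetical helper mirroring the documented kubelet behavior, not code from the test):

```python
# Hypothetical predicate: is /etc/hosts kubelet-managed for this container?
# Per kubelet behavior, it is managed only when the pod is not host-networked
# and the container does not mount anything over /etc/hosts itself.
def etc_hosts_managed(pod_spec, container):
    if pod_spec.get("hostNetwork"):
        return False  # node's own /etc/hosts is used unchanged
    mounts = container.get("volumeMounts", [])
    return not any(m.get("mountPath") == "/etc/hosts" for m in mounts)

# The three cases the log's exec checks correspond to:
assert etc_hosts_managed({}, {})                                   # busybox-1/2
assert not etc_hosts_managed({}, {"volumeMounts": [
    {"mountPath": "/etc/hosts"}]})                                 # busybox-3
assert not etc_hosts_managed({"hostNetwork": True}, {})            # host-network pod
```

Each `ExecWithOptions {Command:[cat /etc/hosts] ...}` block in the log is one probe of this rule, comparing the managed file against the image's original (`/etc/hosts-original`).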
Jun 3 14:23:07.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:23:07.116: INFO: namespace e2e-kubelet-etc-hosts-7556 deletion completed in 40.135230019s • [SLOW TEST:53.610 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:23:07.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-5a05b199-20ab-4907-898f-36b60fd2f5e3 in namespace container-probe-5608 Jun 3 14:23:11.212: INFO: Started pod busybox-5a05b199-20ab-4907-898f-36b60fd2f5e3 in namespace container-probe-5608 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 14:23:11.215: INFO: Initial restart count of pod busybox-5a05b199-20ab-4907-898f-36b60fd2f5e3 is 0 Jun 3 14:24:01.393: INFO: 
Restart count of pod container-probe-5608/busybox-5a05b199-20ab-4907-898f-36b60fd2f5e3 is now 1 (50.178851446s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:24:01.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5608" for this suite. Jun 3 14:24:07.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:24:07.538: INFO: namespace container-probe-5608 deletion completed in 6.078007202s • [SLOW TEST:60.421 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:24:07.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes 
STEP: verifying the pod is in kubernetes
Jun 3 14:24:11.710: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 3 14:24:26.816: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:24:26.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2493" for this suite.
Jun 3 14:24:32.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:24:32.914: INFO: namespace pods-2493 deletion completed in 6.090875469s
• [SLOW TEST:25.376 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:24:32.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7d15f52b-350f-4754-800e-3401f3464e3a
STEP: Creating a pod to test consume configMaps
Jun 3 14:24:33.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd" in namespace "configmap-1834" to be "success or failure"
Jun 3 14:24:33.026: INFO: Pod "pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.128802ms
Jun 3 14:24:35.030: INFO: Pod "pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021520088s
Jun 3 14:24:37.034: INFO: Pod "pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026025735s
STEP: Saw pod success
Jun 3 14:24:37.034: INFO: Pod "pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd" satisfied condition "success or failure"
Jun 3 14:24:37.037: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd container configmap-volume-test:
STEP: delete the pod
Jun 3 14:24:37.055: INFO: Waiting for pod pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd to disappear
Jun 3 14:24:37.060: INFO: Pod pod-configmaps-034fa627-2eaa-46e7-97cf-cf5e0bb0c5bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:24:37.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1834" for this suite.
Jun 3 14:24:43.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:24:43.210: INFO: namespace configmap-1834 deletion completed in 6.14521042s
• [SLOW TEST:10.294 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:24:43.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 3 14:24:43.257: INFO: Waiting up to 5m0s for pod "pod-8da7be1b-603d-4c42-a65f-75af8c324611" in namespace "emptydir-6227" to be "success or failure"
Jun 3 14:24:43.268: INFO: Pod "pod-8da7be1b-603d-4c42-a65f-75af8c324611": Phase="Pending", Reason="", readiness=false. Elapsed: 11.366219ms
Jun 3 14:24:45.317: INFO: Pod "pod-8da7be1b-603d-4c42-a65f-75af8c324611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05973151s
Jun 3 14:24:47.321: INFO: Pod "pod-8da7be1b-603d-4c42-a65f-75af8c324611": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064457362s
STEP: Saw pod success
Jun 3 14:24:47.321: INFO: Pod "pod-8da7be1b-603d-4c42-a65f-75af8c324611" satisfied condition "success or failure"
Jun 3 14:24:47.325: INFO: Trying to get logs from node iruya-worker pod pod-8da7be1b-603d-4c42-a65f-75af8c324611 container test-container:
STEP: delete the pod
Jun 3 14:24:47.349: INFO: Waiting for pod pod-8da7be1b-603d-4c42-a65f-75af8c324611 to disappear
Jun 3 14:24:47.352: INFO: Pod pod-8da7be1b-603d-4c42-a65f-75af8c324611 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:24:47.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6227" for this suite.
Jun 3 14:24:53.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:24:53.454: INFO: namespace emptydir-6227 deletion completed in 6.099400072s
• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:24:53.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257
Jun 3 14:24:53.631: INFO: Pod name my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257: Found 0 pods out of 1
Jun 3 14:24:58.635: INFO: Pod name my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257: Found 1 pods out of 1
Jun 3 14:24:58.635: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257" are running
Jun 3 14:24:58.638: INFO: Pod "my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257-mmpfw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 14:24:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 14:24:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 14:24:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 14:24:53 +0000 UTC Reason: Message:}])
Jun 3 14:24:58.638: INFO: Trying to dial the pod
Jun 3 14:25:03.656: INFO: Controller my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257: Got expected result from replica 1 [my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257-mmpfw]: "my-hostname-basic-92a7f46a-d0cb-423a-9e74-1af6c1d00257-mmpfw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:25:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7803" for this suite.
Jun 3 14:25:09.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:25:09.756: INFO: namespace replication-controller-7803 deletion completed in 6.096528097s
• [SLOW TEST:16.302 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:25:09.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-b2bdedc5-88aa-4cae-85ac-446ebbca7d4f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:25:15.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5221" for this suite.
Jun 3 14:25:37.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:25:38.067: INFO: namespace configmap-5221 deletion completed in 22.124017111s
• [SLOW TEST:28.310 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:25:38.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-tm6k
STEP: Creating a pod to test atomic-volume-subpath
Jun 3 14:25:38.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tm6k" in namespace "subpath-7609" to be "success or failure"
Jun 3 14:25:38.200: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.617612ms
Jun 3 14:25:40.204: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022485974s
Jun 3 14:25:42.208: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 4.02706904s
Jun 3 14:25:44.212: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 6.031064714s
Jun 3 14:25:46.217: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 8.03544808s
Jun 3 14:25:48.222: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 10.040216287s
Jun 3 14:25:50.226: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 12.044445312s
Jun 3 14:25:52.231: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 14.049763833s
Jun 3 14:25:54.235: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 16.054136721s
Jun 3 14:25:56.242: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 18.060727503s
Jun 3 14:25:58.247: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 20.065801556s
Jun 3 14:26:00.251: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Running", Reason="", readiness=true. Elapsed: 22.070070977s
Jun 3 14:26:02.256: INFO: Pod "pod-subpath-test-downwardapi-tm6k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074776688s
STEP: Saw pod success
Jun 3 14:26:02.256: INFO: Pod "pod-subpath-test-downwardapi-tm6k" satisfied condition "success or failure"
Jun 3 14:26:02.259: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-tm6k container test-container-subpath-downwardapi-tm6k:
STEP: delete the pod
Jun 3 14:26:02.307: INFO: Waiting for pod pod-subpath-test-downwardapi-tm6k to disappear
Jun 3 14:26:02.319: INFO: Pod pod-subpath-test-downwardapi-tm6k no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-tm6k
Jun 3 14:26:02.319: INFO: Deleting pod "pod-subpath-test-downwardapi-tm6k" in namespace "subpath-7609"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:26:02.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7609" for this suite.
Jun 3 14:26:08.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:26:08.459: INFO: namespace subpath-7609 deletion completed in 6.135312349s
• [SLOW TEST:30.392 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:26:08.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 3 14:26:08.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3" in namespace "projected-7290" to be "success or failure"
Jun 3 14:26:08.535: INFO: Pod "downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.257268ms
Jun 3 14:26:10.540: INFO: Pod "downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00800901s
Jun 3 14:26:12.545: INFO: Pod "downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013336881s
STEP: Saw pod success
Jun 3 14:26:12.545: INFO: Pod "downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3" satisfied condition "success or failure"
Jun 3 14:26:12.547: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3 container client-container:
STEP: delete the pod
Jun 3 14:26:12.609: INFO: Waiting for pod downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3 to disappear
Jun 3 14:26:12.613: INFO: Pod downwardapi-volume-d50d8dfb-79db-4216-909e-ced06b971ed3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:26:12.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7290" for this suite.
Jun 3 14:26:18.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:26:18.750: INFO: namespace projected-7290 deletion completed in 6.134425399s
• [SLOW TEST:10.290 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:26:18.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 3 14:26:18.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:18.880: INFO: Number of nodes with available pods: 0
Jun 3 14:26:18.880: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:19.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:19.889: INFO: Number of nodes with available pods: 0
Jun 3 14:26:19.889: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:20.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:20.890: INFO: Number of nodes with available pods: 0
Jun 3 14:26:20.890: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:21.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:22.168: INFO: Number of nodes with available pods: 0
Jun 3 14:26:22.168: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:22.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:22.889: INFO: Number of nodes with available pods: 1
Jun 3 14:26:22.889: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 3 14:26:23.893: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:23.905: INFO: Number of nodes with available pods: 2
Jun 3 14:26:23.905: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 3 14:26:23.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:23.950: INFO: Number of nodes with available pods: 1
Jun 3 14:26:23.950: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:24.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:24.960: INFO: Number of nodes with available pods: 1
Jun 3 14:26:24.960: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:25.955: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:25.958: INFO: Number of nodes with available pods: 1
Jun 3 14:26:25.958: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:26.955: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:26.958: INFO: Number of nodes with available pods: 1
Jun 3 14:26:26.958: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:27.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:27.957: INFO: Number of nodes with available pods: 1
Jun 3 14:26:27.957: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:28.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:28.960: INFO: Number of nodes with available pods: 1
Jun 3 14:26:28.960: INFO: Node iruya-worker is running more than one daemon pod
Jun 3 14:26:29.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 14:26:29.975: INFO: Number of nodes with available pods: 2
Jun 3 14:26:29.975: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6460, will wait for the garbage collector to delete the pods
Jun 3 14:26:30.038: INFO: Deleting DaemonSet.extensions daemon-set took: 7.839424ms
Jun 3 14:26:30.338: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265295ms
Jun 3 14:26:42.246: INFO: Number of nodes with available pods: 0
Jun 3 14:26:42.246: INFO: Number of running nodes: 0, number of available pods: 0
Jun 3 14:26:42.248: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6460/daemonsets","resourceVersion":"14456036"},"items":null}
Jun 3 14:26:42.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6460/pods","resourceVersion":"14456036"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:26:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6460" for this suite.
Jun 3 14:26:48.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:26:48.380: INFO: namespace daemonsets-6460 deletion completed in 6.120165587s
• [SLOW TEST:29.630 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:26:48.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:26:52.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2013" for this suite.
Jun 3 14:27:34.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:27:34.592: INFO: namespace kubelet-test-2013 deletion completed in 42.116804032s
• [SLOW TEST:46.211 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:27:34.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d46253fd-e813-4149-be74-0d605a63ff2a
STEP: Creating a pod to test consume secrets
Jun 3 14:27:34.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434" in namespace "projected-7218" to be "success or failure"
Jun 3 14:27:34.693: INFO: Pod "pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434": Phase="Pending", Reason="", readiness=false. Elapsed: 9.258588ms
Jun 3 14:27:36.697: INFO: Pod "pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013550947s
Jun 3 14:27:38.702: INFO: Pod "pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018492683s
STEP: Saw pod success
Jun 3 14:27:38.702: INFO: Pod "pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434" satisfied condition "success or failure"
Jun 3 14:27:38.706: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434 container projected-secret-volume-test:
STEP: delete the pod
Jun 3 14:27:38.757: INFO: Waiting for pod pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434 to disappear
Jun 3 14:27:38.776: INFO: Pod pod-projected-secrets-71849c19-f428-4508-97e5-1933b0921434 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:27:38.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7218" for this suite.
Jun 3 14:27:44.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:27:44.897: INFO: namespace projected-7218 deletion completed in 6.117991833s
• [SLOW TEST:10.305 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:27:44.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c3811099-e640-4012-8c09-5ea38ef943b2
STEP: Creating a pod to test consume secrets
Jun 3 14:27:45.042: INFO: Waiting up to 5m0s for pod "pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258" in namespace "secrets-6478" to be "success or failure"
Jun 3 14:27:45.046: INFO: Pod "pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258": Phase="Pending", Reason="", readiness=false. Elapsed: 3.525611ms
Jun 3 14:27:47.050: INFO: Pod "pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007468704s
Jun 3 14:27:49.054: INFO: Pod "pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011442789s
STEP: Saw pod success
Jun 3 14:27:49.054: INFO: Pod "pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258" satisfied condition "success or failure"
Jun 3 14:27:49.056: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258 container secret-volume-test:
STEP: delete the pod
Jun 3 14:27:49.220: INFO: Waiting for pod pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258 to disappear
Jun 3 14:27:49.242: INFO: Pod pod-secrets-0b72f39f-442d-404d-9599-8d4cde786258 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:27:49.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6478" for this suite.
Jun 3 14:27:55.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:27:55.366: INFO: namespace secrets-6478 deletion completed in 6.121068485s • [SLOW TEST:10.468 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:27:55.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-29bba0fc-038e-4f55-9a5a-b3462e367747 STEP: Creating configMap with name cm-test-opt-upd-59625b3f-97af-46e8-8b13-2c3b662ffd6b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-29bba0fc-038e-4f55-9a5a-b3462e367747 STEP: Updating configmap cm-test-opt-upd-59625b3f-97af-46e8-8b13-2c3b662ffd6b STEP: Creating configMap with name cm-test-opt-create-7b672c0f-7e8c-4562-921e-47a100618f5a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:28:03.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9710" for this suite. Jun 3 14:28:27.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:28:27.799: INFO: namespace configmap-9710 deletion completed in 24.088603079s • [SLOW TEST:32.432 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:28:27.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:28:32.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-2117" for this suite. Jun 3 14:28:54.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:28:55.007: INFO: namespace replication-controller-2117 deletion completed in 22.094363064s • [SLOW TEST:27.208 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:28:55.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7776/configmap-test-4b93247e-1521-43df-a2f8-d9a17872953b STEP: Creating a pod to test consume configMaps Jun 3 14:28:55.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e" in namespace "configmap-7776" to be "success or failure" Jun 3 14:28:55.154: INFO: Pod "pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.466267ms Jun 3 14:28:57.158: INFO: Pod "pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02290536s Jun 3 14:28:59.176: INFO: Pod "pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040483912s STEP: Saw pod success Jun 3 14:28:59.176: INFO: Pod "pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e" satisfied condition "success or failure" Jun 3 14:28:59.178: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e container env-test: STEP: delete the pod Jun 3 14:28:59.215: INFO: Waiting for pod pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e to disappear Jun 3 14:28:59.217: INFO: Pod pod-configmaps-3b6b635e-e360-4098-9180-e0d7f380566e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:28:59.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7776" for this suite. Jun 3 14:29:05.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:29:05.340: INFO: namespace configmap-7776 deletion completed in 6.1198502s • [SLOW TEST:10.333 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:29:05.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-04129751-730a-46bf-a48d-cf3da7f32f46 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-04129751-730a-46bf-a48d-cf3da7f32f46 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:29:11.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-258" for this suite. Jun 3 14:29:33.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:29:33.629: INFO: namespace configmap-258 deletion completed in 22.113531341s • [SLOW TEST:28.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:29:33.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:29:33.715: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 3 14:29:33.722: INFO: Number of nodes with available pods: 0 Jun 3 14:29:33.722: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jun 3 14:29:33.769: INFO: Number of nodes with available pods: 0 Jun 3 14:29:33.769: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:34.773: INFO: Number of nodes with available pods: 0 Jun 3 14:29:34.773: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:35.773: INFO: Number of nodes with available pods: 0 Jun 3 14:29:35.773: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:36.773: INFO: Number of nodes with available pods: 0 Jun 3 14:29:36.773: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:37.773: INFO: Number of nodes with available pods: 1 Jun 3 14:29:37.773: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 3 14:29:37.810: INFO: Number of nodes with available pods: 1 Jun 3 14:29:37.810: INFO: Number of running nodes: 0, number of available pods: 1 Jun 3 14:29:38.814: INFO: Number of nodes with available pods: 0 Jun 3 14:29:38.815: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 3 14:29:38.824: INFO: Number of nodes with available pods: 0 Jun 3 14:29:38.824: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:39.830: INFO: Number of nodes with available 
pods: 0 Jun 3 14:29:39.830: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:40.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:40.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:41.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:41.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:42.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:42.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:43.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:43.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:44.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:44.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:45.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:45.830: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:46.830: INFO: Number of nodes with available pods: 0 Jun 3 14:29:46.830: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:47.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:47.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:48.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:48.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:49.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:49.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:50.830: INFO: Number of nodes with available pods: 0 Jun 3 14:29:50.830: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:51.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:51.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:52.830: INFO: Number of nodes with available pods: 0 Jun 3 14:29:52.830: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:53.829: INFO: 
Number of nodes with available pods: 0 Jun 3 14:29:53.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:54.829: INFO: Number of nodes with available pods: 0 Jun 3 14:29:54.829: INFO: Node iruya-worker is running more than one daemon pod Jun 3 14:29:55.828: INFO: Number of nodes with available pods: 1 Jun 3 14:29:55.828: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5557, will wait for the garbage collector to delete the pods Jun 3 14:29:55.894: INFO: Deleting DaemonSet.extensions daemon-set took: 6.189193ms Jun 3 14:29:56.194: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.269496ms Jun 3 14:30:02.230: INFO: Number of nodes with available pods: 0 Jun 3 14:30:02.231: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 14:30:02.233: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5557/daemonsets","resourceVersion":"14456703"},"items":null} Jun 3 14:30:02.236: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5557/pods","resourceVersion":"14456703"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:30:02.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5557" for this suite. 
Jun 3 14:30:08.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:30:08.363: INFO: namespace daemonsets-5557 deletion completed in 6.096494204s • [SLOW TEST:34.734 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:30:08.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jun 3 14:30:08.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-934' Jun 3 14:30:11.149: INFO: stderr: "" Jun 3 14:30:11.149: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jun 3 14:30:12.153: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:12.153: INFO: Found 0 / 1 Jun 3 14:30:13.154: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:13.154: INFO: Found 0 / 1 Jun 3 14:30:14.153: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:14.153: INFO: Found 0 / 1 Jun 3 14:30:15.154: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:15.154: INFO: Found 0 / 1 Jun 3 14:30:16.154: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:16.154: INFO: Found 1 / 1 Jun 3 14:30:16.154: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 14:30:16.156: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:16.156: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 3 14:30:16.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934' Jun 3 14:30:16.295: INFO: stderr: "" Jun 3 14:30:16.295: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Jun 14:30:14.777 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Jun 14:30:14.777 # Server started, Redis version 3.2.12\n1:M 03 Jun 14:30:14.777 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Jun 14:30:14.777 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 3 14:30:16.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934 --tail=1' Jun 3 14:30:16.428: INFO: stderr: "" Jun 3 14:30:16.428: INFO: stdout: "1:M 03 Jun 14:30:14.777 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 3 14:30:16.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934 --limit-bytes=1' Jun 3 14:30:16.529: INFO: stderr: "" Jun 3 14:30:16.529: INFO: stdout: " " STEP: exposing timestamps Jun 3 14:30:16.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934 --tail=1 --timestamps' Jun 3 14:30:16.640: INFO: stderr: "" Jun 3 14:30:16.640: INFO: stdout: "2020-06-03T14:30:14.777726104Z 1:M 03 Jun 14:30:14.777 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 3 14:30:19.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934 --since=1s' Jun 3 14:30:19.254: INFO: stderr: "" Jun 3 14:30:19.254: INFO: stdout: "" Jun 3 14:30:19.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5v4jm redis-master --namespace=kubectl-934 --since=24h' Jun 3 14:30:19.361: INFO: stderr: "" Jun 3 14:30:19.361: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Jun 14:30:14.777 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Jun 14:30:14.777 # Server started, Redis version 3.2.12\n1:M 03 Jun 14:30:14.777 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Jun 14:30:14.777 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jun 3 14:30:19.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-934' Jun 3 14:30:19.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 14:30:19.458: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 3 14:30:19.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-934' Jun 3 14:30:19.551: INFO: stderr: "No resources found.\n" Jun 3 14:30:19.551: INFO: stdout: "" Jun 3 14:30:19.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-934 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 14:30:19.643: INFO: stderr: "" Jun 3 14:30:19.643: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:30:19.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-934" for this suite. 
Jun 3 14:30:25.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:30:25.729: INFO: namespace kubectl-934 deletion completed in 6.083066792s • [SLOW TEST:17.366 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:30:25.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:30:25.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de" in namespace "projected-5926" to be "success or failure" Jun 3 14:30:25.789: INFO: Pod 
"downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184172ms Jun 3 14:30:27.797: INFO: Pod "downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011024973s Jun 3 14:30:29.802: INFO: Pod "downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015786767s STEP: Saw pod success Jun 3 14:30:29.802: INFO: Pod "downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de" satisfied condition "success or failure" Jun 3 14:30:29.805: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de container client-container: STEP: delete the pod Jun 3 14:30:29.861: INFO: Waiting for pod downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de to disappear Jun 3 14:30:29.876: INFO: Pod downwardapi-volume-8a0eac5a-c06a-45a3-8077-c574449787de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:30:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5926" for this suite. 
Jun 3 14:30:35.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:30:36.002: INFO: namespace projected-5926 deletion completed in 6.123521461s • [SLOW TEST:10.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:30:36.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:30:36.051: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 3 14:30:36.122: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 3 14:30:41.126: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 14:30:41.126: INFO: Creating deployment "test-rolling-update-deployment" Jun 3 14:30:41.131: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from 
the one the adopted replica set "test-rolling-update-controller" has Jun 3 14:30:41.174: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 3 14:30:43.182: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 3 14:30:43.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726791441, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726791441, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726791441, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726791441, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 14:30:45.188: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 3 14:30:45.198: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/deployments/test-rolling-update-deployment,UID:de9653fa-8d13-41cc-a37e-b45c840643ea,ResourceVersion:14456902,Generation:1,CreationTimestamp:2020-06-03 14:30:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-03 14:30:41 +0000 UTC 2020-06-03 14:30:41 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-03 14:30:44 +0000 UTC 2020-06-03 14:30:41 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 3 14:30:45.202: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:6b83ef65-f6e6-4c2a-baef-ec9ef574aa15,ResourceVersion:14456891,Generation:1,CreationTimestamp:2020-06-03 14:30:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de9653fa-8d13-41cc-a37e-b45c840643ea 0xc0026f46c7 0xc0026f46c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 3 14:30:45.202: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 3 14:30:45.202: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/replicasets/test-rolling-update-controller,UID:e1b3b906-b43f-402f-b9a0-07a65732cf50,ResourceVersion:14456900,Generation:2,CreationTimestamp:2020-06-03 14:30:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment de9653fa-8d13-41cc-a37e-b45c840643ea 0xc0026f45df 0xc0026f45f0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 3 14:30:45.206: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-k8c9f" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-k8c9f,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/test-rolling-update-deployment-79f6b9d75c-k8c9f,UID:8497092b-67d8-4a6f-9565-aec5d21e411d,ResourceVersion:14456890,Generation:0,CreationTimestamp:2020-06-03 14:30:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 6b83ef65-f6e6-4c2a-baef-ec9ef574aa15 0xc0026f5a17 0xc0026f5a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d68c5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d68c5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d68c5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026f5c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026f5ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:30:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:30:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:30:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 14:30:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.29,StartTime:2020-06-03 14:30:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-03 14:30:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8a94616d2e3cecc0fd669556a1559fb7b95bbe72d72409abe06666c306565293}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:30:45.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-4380" for this suite. Jun 3 14:30:51.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:30:51.464: INFO: namespace deployment-4380 deletion completed in 6.254521252s • [SLOW TEST:15.462 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:30:51.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jun 3 14:30:51.533: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix397989239/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:30:51.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-8446" for this suite. Jun 3 14:30:57.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:30:57.722: INFO: namespace kubectl-8446 deletion completed in 6.100980043s • [SLOW TEST:6.256 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:30:57.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 3 14:30:57.788: INFO: namespace kubectl-2682 Jun 3 14:30:57.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2682' Jun 3 14:30:58.051: INFO: stderr: "" Jun 3 14:30:58.051: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jun 3 14:30:59.055: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:30:59.055: INFO: Found 0 / 1 Jun 3 14:31:00.087: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:31:00.087: INFO: Found 0 / 1 Jun 3 14:31:01.055: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:31:01.055: INFO: Found 0 / 1 Jun 3 14:31:02.055: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:31:02.055: INFO: Found 1 / 1 Jun 3 14:31:02.055: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 14:31:02.063: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:31:02.063: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 14:31:02.063: INFO: wait on redis-master startup in kubectl-2682 Jun 3 14:31:02.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-f2nkq redis-master --namespace=kubectl-2682' Jun 3 14:31:02.192: INFO: stderr: "" Jun 3 14:31:02.192: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Jun 14:31:00.853 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Jun 14:31:00.854 # Server started, Redis version 3.2.12\n1:M 03 Jun 14:31:00.854 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Jun 14:31:00.854 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 3 14:31:02.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2682' Jun 3 14:31:02.327: INFO: stderr: "" Jun 3 14:31:02.327: INFO: stdout: "service/rm2 exposed\n" Jun 3 14:31:02.335: INFO: Service rm2 in namespace kubectl-2682 found. STEP: exposing service Jun 3 14:31:04.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2682' Jun 3 14:31:04.472: INFO: stderr: "" Jun 3 14:31:04.472: INFO: stdout: "service/rm3 exposed\n" Jun 3 14:31:04.479: INFO: Service rm3 in namespace kubectl-2682 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:31:06.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2682" for this suite. 
Jun 3 14:31:28.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:31:28.580: INFO: namespace kubectl-2682 deletion completed in 22.089877085s • [SLOW TEST:30.858 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:31:28.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:31:28.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240" in namespace "projected-7537" to be "success or failure" Jun 3 14:31:28.687: INFO: Pod 
"downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240": Phase="Pending", Reason="", readiness=false. Elapsed: 19.078847ms Jun 3 14:31:30.777: INFO: Pod "downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109804539s Jun 3 14:31:32.782: INFO: Pod "downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114476536s STEP: Saw pod success Jun 3 14:31:32.782: INFO: Pod "downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240" satisfied condition "success or failure" Jun 3 14:31:32.785: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240 container client-container: STEP: delete the pod Jun 3 14:31:32.808: INFO: Waiting for pod downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240 to disappear Jun 3 14:31:32.828: INFO: Pod downwardapi-volume-b2cf077e-3408-42c5-a774-106672ebe240 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:31:32.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7537" for this suite. 
Jun 3 14:31:38.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:31:38.920: INFO: namespace projected-7537 deletion completed in 6.089030755s • [SLOW TEST:10.339 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:31:38.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-de55abbd-5579-4337-b6e5-c81386716570 STEP: Creating a pod to test consume configMaps Jun 3 14:31:39.067: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2" in namespace "projected-513" to be "success or failure" Jun 3 14:31:39.071: INFO: Pod "pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.07704ms Jun 3 14:31:41.075: INFO: Pod "pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007229689s Jun 3 14:31:43.079: INFO: Pod "pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011492822s STEP: Saw pod success Jun 3 14:31:43.079: INFO: Pod "pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2" satisfied condition "success or failure" Jun 3 14:31:43.082: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2 container projected-configmap-volume-test: STEP: delete the pod Jun 3 14:31:43.102: INFO: Waiting for pod pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2 to disappear Jun 3 14:31:43.124: INFO: Pod pod-projected-configmaps-35a2ab37-4ca8-42e2-b554-028685f702b2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:31:43.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-513" for this suite. 
Jun 3 14:31:49.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:31:49.235: INFO: namespace projected-513 deletion completed in 6.107022157s • [SLOW TEST:10.314 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:31:49.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:31:53.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4590" for this suite. 
Jun 3 14:31:59.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:31:59.443: INFO: namespace kubelet-test-4590 deletion completed in 6.101876467s • [SLOW TEST:10.208 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:31:59.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-63c5edc6-f8d7-44e7-b489-2d204a80255e STEP: Creating secret with name s-test-opt-upd-2d2e93ab-1d9e-4da7-9502-e3939fa49ce9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-63c5edc6-f8d7-44e7-b489-2d204a80255e STEP: Updating secret s-test-opt-upd-2d2e93ab-1d9e-4da7-9502-e3939fa49ce9 STEP: Creating secret with name s-test-opt-create-498a6434-cee9-452d-87a4-280ce899aa12 STEP: waiting to observe update in volume [AfterEach] [sig-storage] 
Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:32:09.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7619" for this suite. Jun 3 14:32:31.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:32:31.786: INFO: namespace projected-7619 deletion completed in 22.086623713s • [SLOW TEST:32.343 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:32:31.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 3 14:32:39.224: INFO: 2 pods remaining Jun 3 14:32:39.224: INFO: 0 pods has nil DeletionTimestamp Jun 3 14:32:39.224: INFO: Jun 3 14:32:39.918: INFO: 0 pods remaining Jun 3 14:32:39.918: INFO: 0 pods has nil DeletionTimestamp Jun 3 14:32:39.918: 
INFO: Jun 3 14:32:40.569: INFO: 0 pods remaining Jun 3 14:32:40.569: INFO: 0 pods has nil DeletionTimestamp Jun 3 14:32:40.569: INFO: STEP: Gathering metrics W0603 14:32:41.685271 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 3 14:32:41.685: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:32:41.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7605" for this suite. 
Jun 3 14:32:47.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:32:47.792: INFO: namespace gc-7605 deletion completed in 6.10439022s • [SLOW TEST:16.006 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:32:47.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e3316513-dc0a-41ba-96de-de9638763bec STEP: Creating a pod to test consume configMaps Jun 3 14:32:47.895: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862" in namespace "projected-2739" to be "success or failure" Jun 3 14:32:47.899: INFO: Pod "pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.561793ms Jun 3 14:32:49.902: INFO: Pod "pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007161125s Jun 3 14:32:51.906: INFO: Pod "pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011214471s STEP: Saw pod success Jun 3 14:32:51.906: INFO: Pod "pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862" satisfied condition "success or failure" Jun 3 14:32:51.910: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862 container projected-configmap-volume-test: STEP: delete the pod Jun 3 14:32:51.933: INFO: Waiting for pod pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862 to disappear Jun 3 14:32:51.936: INFO: Pod pod-projected-configmaps-304af969-efd1-4e15-8b9a-7337e66ff862 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:32:51.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2739" for this suite. 
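The projected-configMap spec follows the e2e pattern visible in the log: create a ConfigMap, mount it into a pod through a projected volume, and poll the pod phase (Pending, then Succeeded) for up to 5m0s. A minimal manifest sketch of such a pod; the image, key name, and mount path are illustrative assumptions (the real test generates UUID-suffixed names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume   # test appends a UUID suffix
data:
  data-1: value-1                         # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never                    # lets the pod reach Succeeded
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```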
Jun 3 14:32:57.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:32:58.026: INFO: namespace projected-2739 deletion completed in 6.086326285s • [SLOW TEST:10.232 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:32:58.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8036e237-ef09-44b5-802b-84a3f4499229 STEP: Creating a pod to test consume configMaps Jun 3 14:32:58.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937" in namespace "configmap-4645" to be "success or failure" Jun 3 14:32:58.170: INFO: Pod "pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006833ms Jun 3 14:33:00.174: INFO: Pod "pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008366884s Jun 3 14:33:02.179: INFO: Pod "pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013175268s STEP: Saw pod success Jun 3 14:33:02.179: INFO: Pod "pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937" satisfied condition "success or failure" Jun 3 14:33:02.183: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937 container configmap-volume-test: STEP: delete the pod Jun 3 14:33:02.217: INFO: Waiting for pod pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937 to disappear Jun 3 14:33:02.224: INFO: Pod pod-configmaps-0c5178b6-618b-4c14-9736-e3b00b8a9937 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:33:02.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4645" for this suite. Jun 3 14:33:08.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:33:08.336: INFO: namespace configmap-4645 deletion completed in 6.108183324s • [SLOW TEST:10.310 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:33:08.336: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jun 3 14:33:14.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6d745833-8aed-476b-85d8-4a28f867b786 -c busybox-main-container --namespace=emptydir-5945 -- cat /usr/share/volumeshare/shareddata.txt' Jun 3 14:33:14.703: INFO: stderr: "I0603 14:33:14.600804 3636 log.go:172] (0xc00012a6e0) (0xc0003faa00) Create stream\nI0603 14:33:14.600887 3636 log.go:172] (0xc00012a6e0) (0xc0003faa00) Stream added, broadcasting: 1\nI0603 14:33:14.604493 3636 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0603 14:33:14.604544 3636 log.go:172] (0xc00012a6e0) (0xc0003fa000) Create stream\nI0603 14:33:14.604562 3636 log.go:172] (0xc00012a6e0) (0xc0003fa000) Stream added, broadcasting: 3\nI0603 14:33:14.606011 3636 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0603 14:33:14.606059 3636 log.go:172] (0xc00012a6e0) (0xc0003fa140) Create stream\nI0603 14:33:14.606072 3636 log.go:172] (0xc00012a6e0) (0xc0003fa140) Stream added, broadcasting: 5\nI0603 14:33:14.606906 3636 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0603 14:33:14.695307 3636 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0603 14:33:14.695366 3636 log.go:172] (0xc0003fa140) (5) Data frame handling\nI0603 14:33:14.695410 3636 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0603 14:33:14.695448 3636 log.go:172] (0xc0003fa000) (3) Data frame handling\nI0603 14:33:14.695499 3636 log.go:172] (0xc0003fa000) (3) Data frame sent\nI0603 14:33:14.695544 3636 log.go:172] (0xc00012a6e0) Data frame 
received for 3\nI0603 14:33:14.695566 3636 log.go:172] (0xc0003fa000) (3) Data frame handling\nI0603 14:33:14.697513 3636 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0603 14:33:14.697551 3636 log.go:172] (0xc0003faa00) (1) Data frame handling\nI0603 14:33:14.697575 3636 log.go:172] (0xc0003faa00) (1) Data frame sent\nI0603 14:33:14.697744 3636 log.go:172] (0xc00012a6e0) (0xc0003faa00) Stream removed, broadcasting: 1\nI0603 14:33:14.697771 3636 log.go:172] (0xc00012a6e0) Go away received\nI0603 14:33:14.698261 3636 log.go:172] (0xc00012a6e0) (0xc0003faa00) Stream removed, broadcasting: 1\nI0603 14:33:14.698284 3636 log.go:172] (0xc00012a6e0) (0xc0003fa000) Stream removed, broadcasting: 3\nI0603 14:33:14.698296 3636 log.go:172] (0xc00012a6e0) (0xc0003fa140) Stream removed, broadcasting: 5\n" Jun 3 14:33:14.703: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:33:14.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5945" for this suite. 
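The EmptyDir spec above runs a single pod whose two containers mount the same emptyDir volume; the kubectl exec in the log then reads, from the main container, the file the other container wrote. A minimal sketch of such a pod, reusing the container name, mount path, and message from the log (the image and the writer container's name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example           # test uses a UUID-suffixed name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: busybox-main-container           # reader: name taken from the log
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container            # writer: name is an assumption
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```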
Jun 3 14:33:20.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:33:20.821: INFO: namespace emptydir-5945 deletion completed in 6.114378525s • [SLOW TEST:12.485 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:33:20.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3358/secret-test-623dbea8-715a-49ed-b949-4a25b4a1cd10 STEP: Creating a pod to test consume secrets Jun 3 14:33:20.894: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416" in namespace "secrets-3358" to be "success or failure" Jun 3 14:33:20.918: INFO: Pod "pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416": Phase="Pending", Reason="", readiness=false. Elapsed: 23.937332ms Jun 3 14:33:22.923: INFO: Pod "pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02872462s Jun 3 14:33:24.928: INFO: Pod "pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033318654s STEP: Saw pod success Jun 3 14:33:24.928: INFO: Pod "pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416" satisfied condition "success or failure" Jun 3 14:33:24.931: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416 container env-test: STEP: delete the pod Jun 3 14:33:24.999: INFO: Waiting for pod pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416 to disappear Jun 3 14:33:25.114: INFO: Pod pod-configmaps-f8ac41bb-7828-4813-b117-89af40880416 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:33:25.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3358" for this suite. Jun 3 14:33:31.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:33:31.264: INFO: namespace secrets-3358 deletion completed in 6.14522703s • [SLOW TEST:10.442 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:33:31.264: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:33:31.364: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:33:35.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8663" for this suite. Jun 3 14:34:13.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:34:13.570: INFO: namespace pods-8663 deletion completed in 38.110574387s • [SLOW TEST:42.305 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:34:13.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 3 14:34:13.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943" in namespace "projected-6171" to be "success or failure" Jun 3 14:34:13.655: INFO: Pod "downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.838173ms Jun 3 14:34:15.659: INFO: Pod "downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008929814s Jun 3 14:34:17.663: INFO: Pod "downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01297512s STEP: Saw pod success Jun 3 14:34:17.663: INFO: Pod "downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943" satisfied condition "success or failure" Jun 3 14:34:17.667: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943 container client-container: STEP: delete the pod Jun 3 14:34:17.697: INFO: Waiting for pod downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943 to disappear Jun 3 14:34:17.709: INFO: Pod downwardapi-volume-7cc41e12-516f-4f91-b293-6565b474a943 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:34:17.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6171" for this suite. 
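The downward-API spec above exposes the container's cpu limit through a projected downwardAPI volume and reads it back from the client-container. A sketch of the relevant manifest, with an illustrative cpu limit and file path (only the container name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example         # test uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                        # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```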
Jun 3 14:34:23.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:34:23.805: INFO: namespace projected-6171 deletion completed in 6.093031598s • [SLOW TEST:10.235 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:34:23.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7860 I0603 14:34:23.878032 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7860, replica count: 1 I0603 14:34:24.928447 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 14:34:25.928740 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 14:34:26.928952 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 14:34:27.053: INFO: Created: latency-svc-gw4xr Jun 3 14:34:27.064: INFO: Got endpoints: latency-svc-gw4xr [34.888389ms] Jun 3 14:34:27.151: INFO: Created: latency-svc-msgfd Jun 3 14:34:27.154: INFO: Got endpoints: latency-svc-msgfd [90.584763ms] Jun 3 14:34:27.183: INFO: Created: latency-svc-xl6df Jun 3 14:34:27.207: INFO: Got endpoints: latency-svc-xl6df [142.60005ms] Jun 3 14:34:27.231: INFO: Created: latency-svc-qx65q Jun 3 14:34:27.249: INFO: Got endpoints: latency-svc-qx65q [185.319179ms] Jun 3 14:34:27.300: INFO: Created: latency-svc-p96w5 Jun 3 14:34:27.303: INFO: Got endpoints: latency-svc-p96w5 [239.025317ms] Jun 3 14:34:27.328: INFO: Created: latency-svc-pt9hl Jun 3 14:34:27.352: INFO: Got endpoints: latency-svc-pt9hl [288.31748ms] Jun 3 14:34:27.392: INFO: Created: latency-svc-nzw7j Jun 3 14:34:27.438: INFO: Got endpoints: latency-svc-nzw7j [374.000659ms] Jun 3 14:34:27.458: INFO: Created: latency-svc-nlwtl Jun 3 14:34:27.472: INFO: Got endpoints: latency-svc-nlwtl [407.70118ms] Jun 3 14:34:27.488: INFO: Created: latency-svc-n5jwh Jun 3 14:34:27.520: INFO: Got endpoints: latency-svc-n5jwh [456.014921ms] Jun 3 14:34:27.576: INFO: Created: latency-svc-f7m85 Jun 3 14:34:27.578: INFO: Got endpoints: latency-svc-f7m85 [513.707594ms] Jun 3 14:34:27.605: INFO: Created: latency-svc-tq69s Jun 3 14:34:27.652: INFO: Got endpoints: latency-svc-tq69s [588.153253ms] Jun 3 14:34:27.714: INFO: Created: latency-svc-hv8nv Jun 3 14:34:27.716: INFO: Got endpoints: latency-svc-hv8nv [652.370085ms] Jun 3 14:34:27.761: INFO: Created: latency-svc-rtvvg Jun 3 14:34:27.775: INFO: Got endpoints: latency-svc-rtvvg [710.510696ms] Jun 3 14:34:27.857: INFO: Created: latency-svc-bzpb8 Jun 3 14:34:27.862: INFO: Got endpoints: latency-svc-bzpb8 [798.202565ms] Jun 3 14:34:27.884: INFO: Created: latency-svc-cglhh Jun 3 14:34:27.899: INFO: Got endpoints: latency-svc-cglhh [835.326255ms] Jun 3 
14:34:27.920: INFO: Created: latency-svc-zjptq Jun 3 14:34:27.936: INFO: Got endpoints: latency-svc-zjptq [871.819844ms] Jun 3 14:34:27.957: INFO: Created: latency-svc-22k25 Jun 3 14:34:27.994: INFO: Got endpoints: latency-svc-22k25 [839.839606ms] Jun 3 14:34:28.018: INFO: Created: latency-svc-ltcf8 Jun 3 14:34:28.046: INFO: Got endpoints: latency-svc-ltcf8 [838.876547ms] Jun 3 14:34:28.073: INFO: Created: latency-svc-rw9p8 Jun 3 14:34:28.132: INFO: Got endpoints: latency-svc-rw9p8 [883.098592ms] Jun 3 14:34:28.160: INFO: Created: latency-svc-ps5dj Jun 3 14:34:28.171: INFO: Got endpoints: latency-svc-ps5dj [867.558382ms] Jun 3 14:34:28.202: INFO: Created: latency-svc-s9bt5 Jun 3 14:34:28.214: INFO: Got endpoints: latency-svc-s9bt5 [861.095079ms] Jun 3 14:34:28.295: INFO: Created: latency-svc-t2smg Jun 3 14:34:28.299: INFO: Got endpoints: latency-svc-t2smg [860.772352ms] Jun 3 14:34:28.338: INFO: Created: latency-svc-xlcsm Jun 3 14:34:28.358: INFO: Got endpoints: latency-svc-xlcsm [885.941086ms] Jun 3 14:34:28.395: INFO: Created: latency-svc-p5xtr Jun 3 14:34:28.432: INFO: Got endpoints: latency-svc-p5xtr [911.590173ms] Jun 3 14:34:28.448: INFO: Created: latency-svc-fm4s8 Jun 3 14:34:28.460: INFO: Got endpoints: latency-svc-fm4s8 [882.043895ms] Jun 3 14:34:28.479: INFO: Created: latency-svc-tlfbb Jun 3 14:34:28.490: INFO: Got endpoints: latency-svc-tlfbb [838.033765ms] Jun 3 14:34:28.522: INFO: Created: latency-svc-jqlgr Jun 3 14:34:28.570: INFO: Got endpoints: latency-svc-jqlgr [853.129439ms] Jun 3 14:34:28.576: INFO: Created: latency-svc-jzvfj Jun 3 14:34:28.593: INFO: Got endpoints: latency-svc-jzvfj [818.184393ms] Jun 3 14:34:28.616: INFO: Created: latency-svc-dmvr4 Jun 3 14:34:28.629: INFO: Got endpoints: latency-svc-dmvr4 [766.732806ms] Jun 3 14:34:28.660: INFO: Created: latency-svc-thn4l Jun 3 14:34:28.719: INFO: Got endpoints: latency-svc-thn4l [819.614036ms] Jun 3 14:34:28.732: INFO: Created: latency-svc-xjmnl Jun 3 14:34:28.744: INFO: Got endpoints: 
latency-svc-xjmnl [807.460073ms] Jun 3 14:34:28.768: INFO: Created: latency-svc-ppfm8 Jun 3 14:34:28.780: INFO: Got endpoints: latency-svc-ppfm8 [785.627196ms] Jun 3 14:34:28.877: INFO: Created: latency-svc-l7wrt Jun 3 14:34:28.879: INFO: Got endpoints: latency-svc-l7wrt [833.473265ms] Jun 3 14:34:28.928: INFO: Created: latency-svc-g7mbb Jun 3 14:34:28.944: INFO: Got endpoints: latency-svc-g7mbb [811.831224ms] Jun 3 14:34:28.960: INFO: Created: latency-svc-b95kc Jun 3 14:34:28.973: INFO: Got endpoints: latency-svc-b95kc [802.491735ms] Jun 3 14:34:29.025: INFO: Created: latency-svc-h7wjf Jun 3 14:34:29.040: INFO: Got endpoints: latency-svc-h7wjf [826.657661ms] Jun 3 14:34:29.072: INFO: Created: latency-svc-sp5q4 Jun 3 14:34:29.087: INFO: Got endpoints: latency-svc-sp5q4 [788.656346ms] Jun 3 14:34:29.120: INFO: Created: latency-svc-nqftl Jun 3 14:34:29.162: INFO: Got endpoints: latency-svc-nqftl [804.469882ms] Jun 3 14:34:29.186: INFO: Created: latency-svc-2622s Jun 3 14:34:29.202: INFO: Got endpoints: latency-svc-2622s [770.550102ms] Jun 3 14:34:29.220: INFO: Created: latency-svc-z2t69 Jun 3 14:34:29.226: INFO: Got endpoints: latency-svc-z2t69 [765.896244ms] Jun 3 14:34:29.242: INFO: Created: latency-svc-vk9dd Jun 3 14:34:29.250: INFO: Got endpoints: latency-svc-vk9dd [759.90095ms] Jun 3 14:34:29.319: INFO: Created: latency-svc-kwntn Jun 3 14:34:29.322: INFO: Got endpoints: latency-svc-kwntn [751.874944ms] Jun 3 14:34:29.354: INFO: Created: latency-svc-622j8 Jun 3 14:34:29.372: INFO: Got endpoints: latency-svc-622j8 [778.873736ms] Jun 3 14:34:29.402: INFO: Created: latency-svc-gcdrn Jun 3 14:34:29.414: INFO: Got endpoints: latency-svc-gcdrn [785.023322ms] Jun 3 14:34:29.480: INFO: Created: latency-svc-dcwx7 Jun 3 14:34:29.485: INFO: Got endpoints: latency-svc-dcwx7 [766.249041ms] Jun 3 14:34:29.507: INFO: Created: latency-svc-58rb2 Jun 3 14:34:29.516: INFO: Got endpoints: latency-svc-58rb2 [772.390999ms] Jun 3 14:34:29.546: INFO: Created: latency-svc-4z5vn Jun 3 
14:34:29.558: INFO: Got endpoints: latency-svc-4z5vn [778.235842ms] Jun 3 14:34:29.642: INFO: Created: latency-svc-zjw5z Jun 3 14:34:29.655: INFO: Got endpoints: latency-svc-zjw5z [775.473341ms] Jun 3 14:34:29.680: INFO: Created: latency-svc-mb588 Jun 3 14:34:29.710: INFO: Got endpoints: latency-svc-mb588 [765.549755ms] Jun 3 14:34:29.732: INFO: Created: latency-svc-wr6w6 Jun 3 14:34:29.797: INFO: Got endpoints: latency-svc-wr6w6 [824.386812ms] Jun 3 14:34:29.828: INFO: Created: latency-svc-pl96q Jun 3 14:34:29.854: INFO: Got endpoints: latency-svc-pl96q [813.354453ms] Jun 3 14:34:29.884: INFO: Created: latency-svc-g527l Jun 3 14:34:29.896: INFO: Got endpoints: latency-svc-g527l [808.166984ms] Jun 3 14:34:29.941: INFO: Created: latency-svc-885z2 Jun 3 14:34:29.950: INFO: Got endpoints: latency-svc-885z2 [788.066769ms] Jun 3 14:34:29.971: INFO: Created: latency-svc-vbj4g Jun 3 14:34:29.986: INFO: Got endpoints: latency-svc-vbj4g [783.629747ms] Jun 3 14:34:30.008: INFO: Created: latency-svc-p9pjw Jun 3 14:34:30.042: INFO: Got endpoints: latency-svc-p9pjw [816.345939ms] Jun 3 14:34:30.097: INFO: Created: latency-svc-24p5w Jun 3 14:34:30.107: INFO: Got endpoints: latency-svc-24p5w [856.438881ms] Jun 3 14:34:30.142: INFO: Created: latency-svc-cds78 Jun 3 14:34:30.155: INFO: Got endpoints: latency-svc-cds78 [833.395642ms] Jun 3 14:34:30.184: INFO: Created: latency-svc-rdwf5 Jun 3 14:34:30.246: INFO: Got endpoints: latency-svc-rdwf5 [874.348121ms] Jun 3 14:34:30.249: INFO: Created: latency-svc-7qkv6 Jun 3 14:34:30.258: INFO: Got endpoints: latency-svc-7qkv6 [843.324464ms] Jun 3 14:34:30.278: INFO: Created: latency-svc-n2qtv Jun 3 14:34:30.295: INFO: Got endpoints: latency-svc-n2qtv [809.030774ms] Jun 3 14:34:30.310: INFO: Created: latency-svc-5l5qb Jun 3 14:34:30.324: INFO: Got endpoints: latency-svc-5l5qb [808.276241ms] Jun 3 14:34:30.408: INFO: Created: latency-svc-r78d2 Jun 3 14:34:30.411: INFO: Got endpoints: latency-svc-r78d2 [852.421345ms] Jun 3 14:34:30.487: INFO: 
Created: latency-svc-zf7qm Jun 3 14:34:30.499: INFO: Got endpoints: latency-svc-zf7qm [844.118434ms] Jun 3 14:34:30.559: INFO: Created: latency-svc-2nms5 Jun 3 14:34:30.572: INFO: Got endpoints: latency-svc-2nms5 [861.692982ms] Jun 3 14:34:30.598: INFO: Created: latency-svc-2cmfl Jun 3 14:34:30.608: INFO: Got endpoints: latency-svc-2cmfl [810.72938ms] Jun 3 14:34:30.640: INFO: Created: latency-svc-nz585 Jun 3 14:34:30.655: INFO: Got endpoints: latency-svc-nz585 [801.794266ms] Jun 3 14:34:30.701: INFO: Created: latency-svc-mpwk8 Jun 3 14:34:30.710: INFO: Got endpoints: latency-svc-mpwk8 [814.627978ms] Jun 3 14:34:30.739: INFO: Created: latency-svc-m74kl Jun 3 14:34:30.758: INFO: Got endpoints: latency-svc-m74kl [807.680556ms] Jun 3 14:34:30.795: INFO: Created: latency-svc-qd8fs Jun 3 14:34:30.851: INFO: Got endpoints: latency-svc-qd8fs [864.737422ms] Jun 3 14:34:30.853: INFO: Created: latency-svc-wwzzv Jun 3 14:34:30.867: INFO: Got endpoints: latency-svc-wwzzv [824.232003ms] Jun 3 14:34:30.908: INFO: Created: latency-svc-ww5qs Jun 3 14:34:30.923: INFO: Got endpoints: latency-svc-ww5qs [816.359426ms] Jun 3 14:34:30.950: INFO: Created: latency-svc-mcn6t Jun 3 14:34:30.994: INFO: Got endpoints: latency-svc-mcn6t [839.390927ms] Jun 3 14:34:31.005: INFO: Created: latency-svc-dpmmb Jun 3 14:34:31.018: INFO: Got endpoints: latency-svc-dpmmb [771.313434ms] Jun 3 14:34:31.035: INFO: Created: latency-svc-k5lcq Jun 3 14:34:31.048: INFO: Got endpoints: latency-svc-k5lcq [790.300253ms] Jun 3 14:34:31.066: INFO: Created: latency-svc-jm6bg Jun 3 14:34:31.078: INFO: Got endpoints: latency-svc-jm6bg [783.532488ms] Jun 3 14:34:31.134: INFO: Created: latency-svc-f6wqs Jun 3 14:34:31.153: INFO: Got endpoints: latency-svc-f6wqs [828.933739ms] Jun 3 14:34:31.187: INFO: Created: latency-svc-xx2g8 Jun 3 14:34:31.222: INFO: Got endpoints: latency-svc-xx2g8 [810.812857ms] Jun 3 14:34:31.271: INFO: Created: latency-svc-q6rqt Jun 3 14:34:31.291: INFO: Got endpoints: latency-svc-q6rqt 
[792.469372ms] Jun 3 14:34:31.322: INFO: Created: latency-svc-sj65b Jun 3 14:34:31.335: INFO: Got endpoints: latency-svc-sj65b [763.633277ms] Jun 3 14:34:31.363: INFO: Created: latency-svc-hsp9s Jun 3 14:34:31.420: INFO: Got endpoints: latency-svc-hsp9s [811.996872ms] Jun 3 14:34:31.444: INFO: Created: latency-svc-n26wb Jun 3 14:34:31.456: INFO: Got endpoints: latency-svc-n26wb [800.486476ms] Jun 3 14:34:31.474: INFO: Created: latency-svc-dv2t6 Jun 3 14:34:31.486: INFO: Got endpoints: latency-svc-dv2t6 [775.738946ms] Jun 3 14:34:31.508: INFO: Created: latency-svc-z4fmz Jun 3 14:34:31.551: INFO: Got endpoints: latency-svc-z4fmz [792.978779ms] Jun 3 14:34:31.567: INFO: Created: latency-svc-7q569 Jun 3 14:34:31.577: INFO: Got endpoints: latency-svc-7q569 [726.092458ms] Jun 3 14:34:31.597: INFO: Created: latency-svc-ggs4t Jun 3 14:34:31.607: INFO: Got endpoints: latency-svc-ggs4t [740.711325ms] Jun 3 14:34:31.630: INFO: Created: latency-svc-b5rf4 Jun 3 14:34:31.644: INFO: Got endpoints: latency-svc-b5rf4 [720.719396ms] Jun 3 14:34:31.695: INFO: Created: latency-svc-br44c Jun 3 14:34:31.700: INFO: Got endpoints: latency-svc-br44c [705.671398ms] Jun 3 14:34:31.759: INFO: Created: latency-svc-g5lqs Jun 3 14:34:31.776: INFO: Got endpoints: latency-svc-g5lqs [758.638578ms] Jun 3 14:34:31.795: INFO: Created: latency-svc-zpgks Jun 3 14:34:31.839: INFO: Got endpoints: latency-svc-zpgks [790.891895ms] Jun 3 14:34:31.855: INFO: Created: latency-svc-4rzbl Jun 3 14:34:31.867: INFO: Got endpoints: latency-svc-4rzbl [788.672436ms] Jun 3 14:34:31.889: INFO: Created: latency-svc-j9kst Jun 3 14:34:31.904: INFO: Got endpoints: latency-svc-j9kst [750.168598ms] Jun 3 14:34:31.923: INFO: Created: latency-svc-2wczd Jun 3 14:34:31.964: INFO: Got endpoints: latency-svc-2wczd [742.754945ms] Jun 3 14:34:31.993: INFO: Created: latency-svc-c9tqx Jun 3 14:34:32.007: INFO: Got endpoints: latency-svc-c9tqx [715.413954ms] Jun 3 14:34:32.048: INFO: Created: latency-svc-z7f72 Jun 3 14:34:32.060: INFO: 
Got endpoints: latency-svc-z7f72 [724.773298ms] Jun 3 14:34:32.108: INFO: Created: latency-svc-j5vzl Jun 3 14:34:32.114: INFO: Got endpoints: latency-svc-j5vzl [693.985172ms] Jun 3 14:34:32.152: INFO: Created: latency-svc-7mfrt Jun 3 14:34:32.163: INFO: Got endpoints: latency-svc-7mfrt [706.573241ms] Jun 3 14:34:32.182: INFO: Created: latency-svc-ddkhl Jun 3 14:34:32.193: INFO: Got endpoints: latency-svc-ddkhl [707.159588ms] Jun 3 14:34:32.240: INFO: Created: latency-svc-rkhm7 Jun 3 14:34:32.243: INFO: Got endpoints: latency-svc-rkhm7 [691.451022ms] Jun 3 14:34:32.281: INFO: Created: latency-svc-kfc4h Jun 3 14:34:32.290: INFO: Got endpoints: latency-svc-kfc4h [713.148083ms] Jun 3 14:34:32.326: INFO: Created: latency-svc-867cl Jun 3 14:34:32.338: INFO: Got endpoints: latency-svc-867cl [731.053144ms] Jun 3 14:34:32.384: INFO: Created: latency-svc-snxwg Jun 3 14:34:32.387: INFO: Got endpoints: latency-svc-snxwg [743.169329ms] Jun 3 14:34:32.419: INFO: Created: latency-svc-cww8t Jun 3 14:34:32.435: INFO: Got endpoints: latency-svc-cww8t [734.554696ms] Jun 3 14:34:32.455: INFO: Created: latency-svc-dqx6k Jun 3 14:34:32.465: INFO: Got endpoints: latency-svc-dqx6k [688.912656ms] Jun 3 14:34:32.536: INFO: Created: latency-svc-khxqk Jun 3 14:34:32.537: INFO: Got endpoints: latency-svc-khxqk [698.180155ms] Jun 3 14:34:32.583: INFO: Created: latency-svc-npqr8 Jun 3 14:34:32.607: INFO: Got endpoints: latency-svc-npqr8 [740.595531ms] Jun 3 14:34:32.677: INFO: Created: latency-svc-lmtlq Jun 3 14:34:32.689: INFO: Got endpoints: latency-svc-lmtlq [785.043317ms] Jun 3 14:34:32.713: INFO: Created: latency-svc-fhbxf Jun 3 14:34:32.724: INFO: Got endpoints: latency-svc-fhbxf [759.644004ms] Jun 3 14:34:32.746: INFO: Created: latency-svc-lmx2m Jun 3 14:34:32.760: INFO: Got endpoints: latency-svc-lmx2m [753.61455ms] Jun 3 14:34:32.815: INFO: Created: latency-svc-xlj92 Jun 3 14:34:32.845: INFO: Got endpoints: latency-svc-xlj92 [785.043912ms] Jun 3 14:34:32.846: INFO: Created: 
latency-svc-xp87x Jun 3 14:34:32.869: INFO: Got endpoints: latency-svc-xp87x [754.924161ms] Jun 3 14:34:32.899: INFO: Created: latency-svc-fhr78 Jun 3 14:34:32.911: INFO: Got endpoints: latency-svc-fhr78 [748.740176ms] Jun 3 14:34:32.965: INFO: Created: latency-svc-w72p6 Jun 3 14:34:32.984: INFO: Got endpoints: latency-svc-w72p6 [790.430538ms] Jun 3 14:34:33.015: INFO: Created: latency-svc-2lw8v Jun 3 14:34:33.026: INFO: Got endpoints: latency-svc-2lw8v [783.503954ms] Jun 3 14:34:33.045: INFO: Created: latency-svc-5dgsl Jun 3 14:34:33.056: INFO: Got endpoints: latency-svc-5dgsl [766.27895ms] Jun 3 14:34:33.114: INFO: Created: latency-svc-np6n4 Jun 3 14:34:33.118: INFO: Got endpoints: latency-svc-np6n4 [779.277359ms] Jun 3 14:34:33.151: INFO: Created: latency-svc-bnq67 Jun 3 14:34:33.159: INFO: Got endpoints: latency-svc-bnq67 [771.993983ms] Jun 3 14:34:33.182: INFO: Created: latency-svc-6kqdp Jun 3 14:34:33.190: INFO: Got endpoints: latency-svc-6kqdp [754.746013ms] Jun 3 14:34:33.208: INFO: Created: latency-svc-r8rkm Jun 3 14:34:33.246: INFO: Got endpoints: latency-svc-r8rkm [780.712586ms] Jun 3 14:34:33.256: INFO: Created: latency-svc-rcnrg Jun 3 14:34:33.282: INFO: Got endpoints: latency-svc-rcnrg [744.750673ms] Jun 3 14:34:33.307: INFO: Created: latency-svc-rjfvc Jun 3 14:34:33.331: INFO: Got endpoints: latency-svc-rjfvc [723.091107ms] Jun 3 14:34:33.400: INFO: Created: latency-svc-dcn5q Jun 3 14:34:33.437: INFO: Got endpoints: latency-svc-dcn5q [748.414214ms] Jun 3 14:34:33.454: INFO: Created: latency-svc-pp86z Jun 3 14:34:33.461: INFO: Got endpoints: latency-svc-pp86z [737.229626ms] Jun 3 14:34:33.513: INFO: Created: latency-svc-56l9n Jun 3 14:34:33.521: INFO: Got endpoints: latency-svc-56l9n [760.760466ms] Jun 3 14:34:33.553: INFO: Created: latency-svc-9wrc5 Jun 3 14:34:33.557: INFO: Got endpoints: latency-svc-9wrc5 [712.123302ms] Jun 3 14:34:33.577: INFO: Created: latency-svc-slzhb Jun 3 14:34:33.588: INFO: Got endpoints: latency-svc-slzhb [718.514286ms] Jun 
3 14:34:33.607: INFO: Created: latency-svc-wcfw9 Jun 3 14:34:33.659: INFO: Got endpoints: latency-svc-wcfw9 [747.616809ms] Jun 3 14:34:33.669: INFO: Created: latency-svc-qfjxt Jun 3 14:34:33.703: INFO: Got endpoints: latency-svc-qfjxt [719.177998ms] Jun 3 14:34:33.735: INFO: Created: latency-svc-jbv2h Jun 3 14:34:33.757: INFO: Got endpoints: latency-svc-jbv2h [731.088926ms] Jun 3 14:34:33.809: INFO: Created: latency-svc-zp6hq Jun 3 14:34:33.813: INFO: Got endpoints: latency-svc-zp6hq [756.104112ms] Jun 3 14:34:33.853: INFO: Created: latency-svc-82n8d Jun 3 14:34:33.866: INFO: Got endpoints: latency-svc-82n8d [748.126348ms] Jun 3 14:34:33.885: INFO: Created: latency-svc-5bsh6 Jun 3 14:34:33.896: INFO: Got endpoints: latency-svc-5bsh6 [736.576475ms] Jun 3 14:34:33.953: INFO: Created: latency-svc-vdzqz Jun 3 14:34:33.956: INFO: Got endpoints: latency-svc-vdzqz [766.524532ms] Jun 3 14:34:33.979: INFO: Created: latency-svc-bb2jl Jun 3 14:34:33.993: INFO: Got endpoints: latency-svc-bb2jl [747.267846ms] Jun 3 14:34:34.014: INFO: Created: latency-svc-x2596 Jun 3 14:34:34.023: INFO: Got endpoints: latency-svc-x2596 [741.006239ms] Jun 3 14:34:34.046: INFO: Created: latency-svc-bj2jb Jun 3 14:34:34.115: INFO: Got endpoints: latency-svc-bj2jb [783.961757ms] Jun 3 14:34:34.117: INFO: Created: latency-svc-5x5zf Jun 3 14:34:34.131: INFO: Got endpoints: latency-svc-5x5zf [694.010544ms] Jun 3 14:34:34.177: INFO: Created: latency-svc-g2d2l Jun 3 14:34:34.192: INFO: Got endpoints: latency-svc-g2d2l [730.683846ms] Jun 3 14:34:34.275: INFO: Created: latency-svc-xc8n8 Jun 3 14:34:34.281: INFO: Got endpoints: latency-svc-xc8n8 [760.051868ms] Jun 3 14:34:34.324: INFO: Created: latency-svc-zfpdr Jun 3 14:34:34.336: INFO: Got endpoints: latency-svc-zfpdr [778.998753ms] Jun 3 14:34:34.369: INFO: Created: latency-svc-x85rv Jun 3 14:34:34.414: INFO: Got endpoints: latency-svc-x85rv [826.201467ms] Jun 3 14:34:34.441: INFO: Created: latency-svc-dj67z Jun 3 14:34:34.476: INFO: Got endpoints: 
latency-svc-dj67z [816.759624ms] Jun 3 14:34:34.501: INFO: Created: latency-svc-vjscm Jun 3 14:34:34.511: INFO: Got endpoints: latency-svc-vjscm [808.025307ms] Jun 3 14:34:34.552: INFO: Created: latency-svc-7j5gj Jun 3 14:34:34.554: INFO: Got endpoints: latency-svc-7j5gj [796.843191ms] Jun 3 14:34:34.581: INFO: Created: latency-svc-fskp2 Jun 3 14:34:34.596: INFO: Got endpoints: latency-svc-fskp2 [783.303026ms] Jun 3 14:34:34.627: INFO: Created: latency-svc-b7r8q Jun 3 14:34:34.638: INFO: Got endpoints: latency-svc-b7r8q [772.435979ms] Jun 3 14:34:34.683: INFO: Created: latency-svc-kbqwf Jun 3 14:34:34.686: INFO: Got endpoints: latency-svc-kbqwf [790.38111ms] Jun 3 14:34:34.711: INFO: Created: latency-svc-ncnzw Jun 3 14:34:34.723: INFO: Got endpoints: latency-svc-ncnzw [766.878634ms] Jun 3 14:34:34.743: INFO: Created: latency-svc-z4qfd Jun 3 14:34:34.759: INFO: Got endpoints: latency-svc-z4qfd [766.055467ms] Jun 3 14:34:34.780: INFO: Created: latency-svc-mrng4 Jun 3 14:34:34.851: INFO: Got endpoints: latency-svc-mrng4 [827.992428ms] Jun 3 14:34:34.867: INFO: Created: latency-svc-6xvk7 Jun 3 14:34:34.899: INFO: Got endpoints: latency-svc-6xvk7 [784.444653ms] Jun 3 14:34:34.936: INFO: Created: latency-svc-lcmjd Jun 3 14:34:34.983: INFO: Got endpoints: latency-svc-lcmjd [851.709473ms] Jun 3 14:34:35.017: INFO: Created: latency-svc-7mzlh Jun 3 14:34:35.031: INFO: Got endpoints: latency-svc-7mzlh [838.328363ms] Jun 3 14:34:35.059: INFO: Created: latency-svc-vmjck Jun 3 14:34:35.114: INFO: Got endpoints: latency-svc-vmjck [832.93392ms] Jun 3 14:34:35.127: INFO: Created: latency-svc-w29hx Jun 3 14:34:35.139: INFO: Got endpoints: latency-svc-w29hx [802.707507ms] Jun 3 14:34:35.158: INFO: Created: latency-svc-cvc5m Jun 3 14:34:35.169: INFO: Got endpoints: latency-svc-cvc5m [755.228326ms] Jun 3 14:34:35.188: INFO: Created: latency-svc-wj456 Jun 3 14:34:35.200: INFO: Got endpoints: latency-svc-wj456 [724.050509ms] Jun 3 14:34:35.247: INFO: Created: latency-svc-r7rn9 Jun 3 
14:34:35.249: INFO: Got endpoints: latency-svc-r7rn9 [737.450265ms] Jun 3 14:34:35.275: INFO: Created: latency-svc-446kh Jun 3 14:34:35.285: INFO: Got endpoints: latency-svc-446kh [730.418315ms] Jun 3 14:34:35.305: INFO: Created: latency-svc-6nfg5 Jun 3 14:34:35.315: INFO: Got endpoints: latency-svc-6nfg5 [718.930319ms] Jun 3 14:34:35.331: INFO: Created: latency-svc-p4q42 Jun 3 14:34:35.346: INFO: Got endpoints: latency-svc-p4q42 [707.274365ms] Jun 3 14:34:35.397: INFO: Created: latency-svc-hzvzg Jun 3 14:34:35.399: INFO: Got endpoints: latency-svc-hzvzg [712.792585ms] Jun 3 14:34:35.446: INFO: Created: latency-svc-22p8j Jun 3 14:34:35.460: INFO: Got endpoints: latency-svc-22p8j [736.698409ms] Jun 3 14:34:35.478: INFO: Created: latency-svc-w5wsp Jun 3 14:34:35.491: INFO: Got endpoints: latency-svc-w5wsp [731.276197ms] Jun 3 14:34:35.540: INFO: Created: latency-svc-m8x64 Jun 3 14:34:35.543: INFO: Got endpoints: latency-svc-m8x64 [692.084818ms] Jun 3 14:34:35.703: INFO: Created: latency-svc-8v5tn Jun 3 14:34:35.705: INFO: Got endpoints: latency-svc-8v5tn [806.123397ms] Jun 3 14:34:35.749: INFO: Created: latency-svc-xjlwk Jun 3 14:34:35.774: INFO: Got endpoints: latency-svc-xjlwk [790.548989ms] Jun 3 14:34:35.794: INFO: Created: latency-svc-6k7cv Jun 3 14:34:35.839: INFO: Got endpoints: latency-svc-6k7cv [808.111091ms] Jun 3 14:34:35.859: INFO: Created: latency-svc-rms6z Jun 3 14:34:35.870: INFO: Got endpoints: latency-svc-rms6z [755.493065ms] Jun 3 14:34:35.890: INFO: Created: latency-svc-gkrsw Jun 3 14:34:35.900: INFO: Got endpoints: latency-svc-gkrsw [760.929143ms] Jun 3 14:34:35.917: INFO: Created: latency-svc-wxrgd Jun 3 14:34:35.930: INFO: Got endpoints: latency-svc-wxrgd [760.697312ms] Jun 3 14:34:35.971: INFO: Created: latency-svc-8wggm Jun 3 14:34:35.974: INFO: Got endpoints: latency-svc-8wggm [773.555982ms] Jun 3 14:34:35.995: INFO: Created: latency-svc-4qmwc Jun 3 14:34:36.003: INFO: Got endpoints: latency-svc-4qmwc [754.201006ms] Jun 3 14:34:36.021: INFO: 
Created: latency-svc-j8bp7 Jun 3 14:34:36.033: INFO: Got endpoints: latency-svc-j8bp7 [748.575504ms] Jun 3 14:34:36.051: INFO: Created: latency-svc-9kzfk Jun 3 14:34:36.064: INFO: Got endpoints: latency-svc-9kzfk [748.624889ms] Jun 3 14:34:36.109: INFO: Created: latency-svc-fm5w5 Jun 3 14:34:36.111: INFO: Got endpoints: latency-svc-fm5w5 [765.745921ms] Jun 3 14:34:36.163: INFO: Created: latency-svc-shv86 Jun 3 14:34:36.172: INFO: Got endpoints: latency-svc-shv86 [772.962012ms] Jun 3 14:34:36.205: INFO: Created: latency-svc-tq9r7 Jun 3 14:34:36.246: INFO: Got endpoints: latency-svc-tq9r7 [786.222481ms] Jun 3 14:34:36.268: INFO: Created: latency-svc-qz7js Jun 3 14:34:36.288: INFO: Got endpoints: latency-svc-qz7js [797.470563ms] Jun 3 14:34:36.319: INFO: Created: latency-svc-lgbtq Jun 3 14:34:36.384: INFO: Got endpoints: latency-svc-lgbtq [840.945292ms] Jun 3 14:34:36.412: INFO: Created: latency-svc-29kwt Jun 3 14:34:36.426: INFO: Got endpoints: latency-svc-29kwt [720.907524ms] Jun 3 14:34:36.454: INFO: Created: latency-svc-q2tk4 Jun 3 14:34:36.474: INFO: Got endpoints: latency-svc-q2tk4 [700.34554ms] Jun 3 14:34:36.541: INFO: Created: latency-svc-pxkvh Jun 3 14:34:36.542: INFO: Got endpoints: latency-svc-pxkvh [703.620294ms] Jun 3 14:34:36.565: INFO: Created: latency-svc-l6wpp Jun 3 14:34:36.577: INFO: Got endpoints: latency-svc-l6wpp [706.534864ms] Jun 3 14:34:36.620: INFO: Created: latency-svc-c82rh Jun 3 14:34:36.671: INFO: Got endpoints: latency-svc-c82rh [771.064154ms] Jun 3 14:34:36.681: INFO: Created: latency-svc-pdjb4 Jun 3 14:34:36.698: INFO: Got endpoints: latency-svc-pdjb4 [767.347009ms] Jun 3 14:34:36.717: INFO: Created: latency-svc-lzswk Jun 3 14:34:36.734: INFO: Got endpoints: latency-svc-lzswk [760.175654ms] Jun 3 14:34:36.815: INFO: Created: latency-svc-9wrt4 Jun 3 14:34:36.818: INFO: Got endpoints: latency-svc-9wrt4 [815.274432ms] Jun 3 14:34:36.840: INFO: Created: latency-svc-fn4bl Jun 3 14:34:36.854: INFO: Got endpoints: latency-svc-fn4bl 
[820.834007ms] Jun 3 14:34:36.883: INFO: Created: latency-svc-5jl7p Jun 3 14:34:36.897: INFO: Got endpoints: latency-svc-5jl7p [833.038633ms] Jun 3 14:34:36.960: INFO: Created: latency-svc-s57bc Jun 3 14:34:36.961: INFO: Got endpoints: latency-svc-s57bc [849.904918ms] Jun 3 14:34:37.121: INFO: Created: latency-svc-2tbl8 Jun 3 14:34:37.126: INFO: Got endpoints: latency-svc-2tbl8 [953.345575ms] Jun 3 14:34:37.155: INFO: Created: latency-svc-gr9sp Jun 3 14:34:37.167: INFO: Got endpoints: latency-svc-gr9sp [921.131838ms] Jun 3 14:34:37.186: INFO: Created: latency-svc-ch7zr Jun 3 14:34:37.198: INFO: Got endpoints: latency-svc-ch7zr [909.461619ms] Jun 3 14:34:37.215: INFO: Created: latency-svc-zhhz8 Jun 3 14:34:37.258: INFO: Got endpoints: latency-svc-zhhz8 [874.05552ms] Jun 3 14:34:37.272: INFO: Created: latency-svc-lj645 Jun 3 14:34:37.289: INFO: Got endpoints: latency-svc-lj645 [862.261804ms] Jun 3 14:34:37.308: INFO: Created: latency-svc-8qtcx Jun 3 14:34:37.318: INFO: Got endpoints: latency-svc-8qtcx [844.27989ms] Jun 3 14:34:37.345: INFO: Created: latency-svc-bx85r Jun 3 14:34:37.396: INFO: Got endpoints: latency-svc-bx85r [853.188957ms] Jun 3 14:34:37.406: INFO: Created: latency-svc-j4l7f Jun 3 14:34:37.439: INFO: Got endpoints: latency-svc-j4l7f [862.59711ms] Jun 3 14:34:37.473: INFO: Created: latency-svc-d84w8 Jun 3 14:34:37.489: INFO: Got endpoints: latency-svc-d84w8 [817.628158ms] Jun 3 14:34:37.528: INFO: Created: latency-svc-vt6t6 Jun 3 14:34:37.531: INFO: Got endpoints: latency-svc-vt6t6 [833.246182ms] Jun 3 14:34:37.560: INFO: Created: latency-svc-gkwh9 Jun 3 14:34:37.572: INFO: Got endpoints: latency-svc-gkwh9 [837.860723ms] Jun 3 14:34:37.572: INFO: Latencies: [90.584763ms 142.60005ms 185.319179ms 239.025317ms 288.31748ms 374.000659ms 407.70118ms 456.014921ms 513.707594ms 588.153253ms 652.370085ms 688.912656ms 691.451022ms 692.084818ms 693.985172ms 694.010544ms 698.180155ms 700.34554ms 703.620294ms 705.671398ms 706.534864ms 706.573241ms 707.159588ms 
707.274365ms 710.510696ms 712.123302ms 712.792585ms 713.148083ms 715.413954ms 718.514286ms 718.930319ms 719.177998ms 720.719396ms 720.907524ms 723.091107ms 724.050509ms 724.773298ms 726.092458ms 730.418315ms 730.683846ms 731.053144ms 731.088926ms 731.276197ms 734.554696ms 736.576475ms 736.698409ms 737.229626ms 737.450265ms 740.595531ms 740.711325ms 741.006239ms 742.754945ms 743.169329ms 744.750673ms 747.267846ms 747.616809ms 748.126348ms 748.414214ms 748.575504ms 748.624889ms 748.740176ms 750.168598ms 751.874944ms 753.61455ms 754.201006ms 754.746013ms 754.924161ms 755.228326ms 755.493065ms 756.104112ms 758.638578ms 759.644004ms 759.90095ms 760.051868ms 760.175654ms 760.697312ms 760.760466ms 760.929143ms 763.633277ms 765.549755ms 765.745921ms 765.896244ms 766.055467ms 766.249041ms 766.27895ms 766.524532ms 766.732806ms 766.878634ms 767.347009ms 770.550102ms 771.064154ms 771.313434ms 771.993983ms 772.390999ms 772.435979ms 772.962012ms 773.555982ms 775.473341ms 775.738946ms 778.235842ms 778.873736ms 778.998753ms 779.277359ms 780.712586ms 783.303026ms 783.503954ms 783.532488ms 783.629747ms 783.961757ms 784.444653ms 785.023322ms 785.043317ms 785.043912ms 785.627196ms 786.222481ms 788.066769ms 788.656346ms 788.672436ms 790.300253ms 790.38111ms 790.430538ms 790.548989ms 790.891895ms 792.469372ms 792.978779ms 796.843191ms 797.470563ms 798.202565ms 800.486476ms 801.794266ms 802.491735ms 802.707507ms 804.469882ms 806.123397ms 807.460073ms 807.680556ms 808.025307ms 808.111091ms 808.166984ms 808.276241ms 809.030774ms 810.72938ms 810.812857ms 811.831224ms 811.996872ms 813.354453ms 814.627978ms 815.274432ms 816.345939ms 816.359426ms 816.759624ms 817.628158ms 818.184393ms 819.614036ms 820.834007ms 824.232003ms 824.386812ms 826.201467ms 826.657661ms 827.992428ms 828.933739ms 832.93392ms 833.038633ms 833.246182ms 833.395642ms 833.473265ms 835.326255ms 837.860723ms 838.033765ms 838.328363ms 838.876547ms 839.390927ms 839.839606ms 840.945292ms 843.324464ms 844.118434ms 844.27989ms 
849.904918ms 851.709473ms 852.421345ms 853.129439ms 853.188957ms 856.438881ms 860.772352ms 861.095079ms 861.692982ms 862.261804ms 862.59711ms 864.737422ms 867.558382ms 871.819844ms 874.05552ms 874.348121ms 882.043895ms 883.098592ms 885.941086ms 909.461619ms 911.590173ms 921.131838ms 953.345575ms]
Jun 3 14:34:37.572: INFO: 50 %ile: 778.873736ms
Jun 3 14:34:37.572: INFO: 90 %ile: 853.129439ms
Jun 3 14:34:37.572: INFO: 99 %ile: 921.131838ms
Jun 3 14:34:37.572: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:34:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7860" for this suite.
Jun 3 14:35:05.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:35:05.662: INFO: namespace svc-latency-7860 deletion completed in 28.084544106s
• [SLOW TEST:41.857 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:35:05.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod
communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2736 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 14:35:05.745: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 3 14:35:33.847: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.41 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2736 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:35:33.847: INFO: >>> kubeConfig: /root/.kube/config I0603 14:35:33.876576 6 log.go:172] (0xc001b28420) (0xc0026e7180) Create stream I0603 14:35:33.876599 6 log.go:172] (0xc001b28420) (0xc0026e7180) Stream added, broadcasting: 1 I0603 14:35:33.878274 6 log.go:172] (0xc001b28420) Reply frame received for 1 I0603 14:35:33.878361 6 log.go:172] (0xc001b28420) (0xc0029ee3c0) Create stream I0603 14:35:33.878388 6 log.go:172] (0xc001b28420) (0xc0029ee3c0) Stream added, broadcasting: 3 I0603 14:35:33.879405 6 log.go:172] (0xc001b28420) Reply frame received for 3 I0603 14:35:33.879438 6 log.go:172] (0xc001b28420) (0xc002f1d680) Create stream I0603 14:35:33.879449 6 log.go:172] (0xc001b28420) (0xc002f1d680) Stream added, broadcasting: 5 I0603 14:35:33.880394 6 log.go:172] (0xc001b28420) Reply frame received for 5 I0603 14:35:34.993809 6 log.go:172] (0xc001b28420) Data frame received for 3 I0603 14:35:34.993850 6 log.go:172] (0xc0029ee3c0) (3) Data frame handling I0603 14:35:34.993903 6 log.go:172] (0xc0029ee3c0) (3) Data frame sent I0603 14:35:34.993958 6 log.go:172] (0xc001b28420) Data frame received for 3 I0603 14:35:34.993985 6 log.go:172] (0xc0029ee3c0) (3) Data frame handling I0603 14:35:34.994040 6 log.go:172] (0xc001b28420) Data frame received for 5 I0603 
14:35:34.994072 6 log.go:172] (0xc002f1d680) (5) Data frame handling I0603 14:35:34.996114 6 log.go:172] (0xc001b28420) Data frame received for 1 I0603 14:35:34.996155 6 log.go:172] (0xc0026e7180) (1) Data frame handling I0603 14:35:34.996184 6 log.go:172] (0xc0026e7180) (1) Data frame sent I0603 14:35:34.996207 6 log.go:172] (0xc001b28420) (0xc0026e7180) Stream removed, broadcasting: 1 I0603 14:35:34.996227 6 log.go:172] (0xc001b28420) Go away received I0603 14:35:34.996398 6 log.go:172] (0xc001b28420) (0xc0026e7180) Stream removed, broadcasting: 1 I0603 14:35:34.996439 6 log.go:172] (0xc001b28420) (0xc0029ee3c0) Stream removed, broadcasting: 3 I0603 14:35:34.996488 6 log.go:172] (0xc001b28420) (0xc002f1d680) Stream removed, broadcasting: 5 Jun 3 14:35:34.996: INFO: Found all expected endpoints: [netserver-0] Jun 3 14:35:35.000: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.92 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2736 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 3 14:35:35.000: INFO: >>> kubeConfig: /root/.kube/config I0603 14:35:35.039024 6 log.go:172] (0xc001b28fd0) (0xc0026e7400) Create stream I0603 14:35:35.039055 6 log.go:172] (0xc001b28fd0) (0xc0026e7400) Stream added, broadcasting: 1 I0603 14:35:35.040737 6 log.go:172] (0xc001b28fd0) Reply frame received for 1 I0603 14:35:35.040773 6 log.go:172] (0xc001b28fd0) (0xc0027b6be0) Create stream I0603 14:35:35.040785 6 log.go:172] (0xc001b28fd0) (0xc0027b6be0) Stream added, broadcasting: 3 I0603 14:35:35.041751 6 log.go:172] (0xc001b28fd0) Reply frame received for 3 I0603 14:35:35.041803 6 log.go:172] (0xc001b28fd0) (0xc001a7aa00) Create stream I0603 14:35:35.041825 6 log.go:172] (0xc001b28fd0) (0xc001a7aa00) Stream added, broadcasting: 5 I0603 14:35:35.042614 6 log.go:172] (0xc001b28fd0) Reply frame received for 5 I0603 14:35:36.114832 6 log.go:172] (0xc001b28fd0) Data frame received 
for 5 I0603 14:35:36.114862 6 log.go:172] (0xc001a7aa00) (5) Data frame handling I0603 14:35:36.114882 6 log.go:172] (0xc001b28fd0) Data frame received for 3 I0603 14:35:36.114888 6 log.go:172] (0xc0027b6be0) (3) Data frame handling I0603 14:35:36.114901 6 log.go:172] (0xc0027b6be0) (3) Data frame sent I0603 14:35:36.115052 6 log.go:172] (0xc001b28fd0) Data frame received for 3 I0603 14:35:36.115070 6 log.go:172] (0xc0027b6be0) (3) Data frame handling I0603 14:35:36.118129 6 log.go:172] (0xc001b28fd0) Data frame received for 1 I0603 14:35:36.118155 6 log.go:172] (0xc0026e7400) (1) Data frame handling I0603 14:35:36.118182 6 log.go:172] (0xc0026e7400) (1) Data frame sent I0603 14:35:36.118308 6 log.go:172] (0xc001b28fd0) (0xc0026e7400) Stream removed, broadcasting: 1 I0603 14:35:36.118378 6 log.go:172] (0xc001b28fd0) (0xc0026e7400) Stream removed, broadcasting: 1 I0603 14:35:36.118425 6 log.go:172] (0xc001b28fd0) (0xc0027b6be0) Stream removed, broadcasting: 3 I0603 14:35:36.118440 6 log.go:172] (0xc001b28fd0) (0xc001a7aa00) Stream removed, broadcasting: 5 Jun 3 14:35:36.118: INFO: Found all expected endpoints: [netserver-1] I0603 14:35:36.118484 6 log.go:172] (0xc001b28fd0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:35:36.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2736" for this suite. 
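The UDP check above, run from the hostexec container as `echo hostName | nc -w 1 -u <pod-ip> 8081 | grep -v '^\s*$'`, reduces to: send a datagram to the netserver pod and expect a non-empty hostname reply within one second. A minimal loopback sketch of that request/reply shape (addresses here are local placeholders, not the pod IPs from the log):

```python
import socket

# Stand-in for the netserver pod: a UDP socket that answers each
# probe datagram with its "hostname" (placeholder value below).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port instead of 8081
server.settimeout(1.0)

# Prober side, mirroring `nc -w 1 -u <pod-ip> <port>`.
prober = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
prober.settimeout(1.0)  # like nc's -w 1 idle timeout
prober.sendto(b"hostName\n", server.getsockname())

data, addr = server.recvfrom(1024)     # netserver receives the probe...
server.sendto(b"netserver-0\n", addr)  # ...and replies with its hostname

reply = prober.recv(1024).decode().strip()
print("Found expected endpoint:", reply)

server.close()
prober.close()
```

In the real test the reply carries the pod's actual hostname (netserver-0, netserver-1), and the framework marks the check passed once every expected endpoint name has been seen, which is what the "Found all expected endpoints" lines record.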
Jun 3 14:36:00.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:36:00.224: INFO: namespace pod-network-test-2736 deletion completed in 24.102510215s
• [SLOW TEST:54.562 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:36:00.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 3 14:36:00.324: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5dd73eee-e09d-4502-ac7b-12d084f5969f", Controller:(*bool)(0xc001684512), BlockOwnerDeletion:(*bool)(0xc001684513)}}
Jun 3 14:36:00.358: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b862f846-db5f-4ab4-8f9a-f272f7a1e066", Controller:(*bool)(0xc00168482a), BlockOwnerDeletion:(*bool)(0xc00168482b)}}
Jun 3 14:36:00.373: INFO:
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1b70e002-5661-4f42-b412-26e6de9d25ef", Controller:(*bool)(0xc00139d302), BlockOwnerDeletion:(*bool)(0xc00139d303)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 3 14:36:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2389" for this suite.
Jun 3 14:36:11.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 3 14:36:11.557: INFO: namespace gc-2389 deletion completed in 6.092572298s
• [SLOW TEST:11.333 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 3 14:36:11.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-785.svc.cluster.local A)" && test -n "$$check" && echo OK >
/results/wheezy_udp@dns-test-service.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-785.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-785.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-785.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-785.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.54.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.54.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.54.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.54.112_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-785.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-785.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-785.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-785.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-785.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-785.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-785.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.54.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.54.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.54.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.54.112_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 14:36:17.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.711: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.730: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.732: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod 
dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.736: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:17.750: INFO: Lookups using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:22.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.760: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.764: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.768: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: 
the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.792: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.795: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.798: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.801: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:22.827: INFO: Lookups using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:27.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods 
dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.764: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.786: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.792: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.796: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:27.814: INFO: Lookups using 
dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:32.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.763: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.766: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.790: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod 
dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.799: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:32.817: INFO: Lookups using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:37.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: 
the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.788: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.791: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:37.816: INFO: Lookups using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:42.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.764: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.821: INFO: Unable to read jessie_udp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.828: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.831: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local from pod dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f: the server could not find the requested resource (get pods dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f) Jun 3 14:36:42.847: INFO: Lookups using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f failed for: [wheezy_udp@dns-test-service.dns-785.svc.cluster.local wheezy_tcp@dns-test-service.dns-785.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_udp@dns-test-service.dns-785.svc.cluster.local jessie_tcp@dns-test-service.dns-785.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-785.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-785.svc.cluster.local] Jun 3 14:36:47.819: INFO: DNS probes using dns-785/dns-test-c1853f58-1430-4a84-a739-e3720b5fc87f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:36:48.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-785" for this suite. 
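[Editor's note] The failures above repeat on a 5-second poll until the DNS records propagate, at which point the probes succeed. Each probe key in the failure lists follows the pattern `<image>_<protocol>@<record name>`, e.g. `wheezy_udp@dns-test-service.dns-785.svc.cluster.local`. A minimal sketch (hypothetical helper, not the e2e framework's actual code) that reproduces the key list seen in the log:

```python
def probe_keys(images, records, protocols):
    """Build DNS probe keys in the order the e2e log reports them:
    per image, per record, per protocol."""
    return [f"{img}_{proto}@{rec}"
            for img in images
            for rec in records
            for proto in protocols]

keys = probe_keys(
    ["wheezy", "jessie"],
    ["dns-test-service.dns-785.svc.cluster.local",
     "_http._tcp.dns-test-service.dns-785.svc.cluster.local"],
    ["udp", "tcp"],
)
# keys matches the eight entries in each "Lookups ... failed for:" line above.
```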
Jun 3 14:36:54.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:36:54.704: INFO: namespace dns-785 deletion completed in 6.257078251s • [SLOW TEST:43.146 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:36:54.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 3 14:36:54.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6870' Jun 3 14:36:55.165: INFO: stderr: "" Jun 3 14:36:55.165: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jun 3 14:36:56.170: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:56.170: INFO: Found 0 / 1 Jun 3 14:36:57.284: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:57.284: INFO: Found 0 / 1 Jun 3 14:36:58.169: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:58.169: INFO: Found 0 / 1 Jun 3 14:36:59.170: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:59.170: INFO: Found 1 / 1 Jun 3 14:36:59.170: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 3 14:36:59.187: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:59.187: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 14:36:59.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vz2qs --namespace=kubectl-6870 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 3 14:36:59.288: INFO: stderr: "" Jun 3 14:36:59.288: INFO: stdout: "pod/redis-master-vz2qs patched\n" STEP: checking annotations Jun 3 14:36:59.290: INFO: Selector matched 1 pods for map[app:redis] Jun 3 14:36:59.291: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:36:59.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6870" for this suite. 
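[Editor's note] The `kubectl patch` invocation above applies a merge patch that adds the annotation `x=y` to the pod. A sketch of composing that same `-p` payload programmatically (stdlib only; pod and namespace names taken from the log, and the `cmd` list is illustrative since running it requires a live cluster):

```python
import json

# The merge-patch body the test passes to `kubectl patch -p`:
# add annotation x=y under the pod's metadata.
patch = {"metadata": {"annotations": {"x": "y"}}}
payload = json.dumps(patch, separators=(",", ":"))

# Equivalent to the invocation shown in the log (illustrative only):
cmd = ["kubectl", "patch", "pod", "redis-master-vz2qs",
       "--namespace=kubectl-6870", "-p", payload]
```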
Jun 3 14:37:21.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:37:21.383: INFO: namespace kubectl-6870 deletion completed in 22.090173511s • [SLOW TEST:26.679 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:37:21.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:37:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1268" for this suite. 
Jun 3 14:37:32.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:37:33.077: INFO: namespace watch-1268 deletion completed in 6.185791614s • [SLOW TEST:11.693 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:37:33.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jun 3 14:37:37.696: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5333 pod-service-account-a43b8dee-6b6b-4390-8d9b-432e9c856cd8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 3 14:37:37.908: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5333 pod-service-account-a43b8dee-6b6b-4390-8d9b-432e9c856cd8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 3 14:37:38.105: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5333 
pod-service-account-a43b8dee-6b6b-4390-8d9b-432e9c856cd8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:37:38.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5333" for this suite. Jun 3 14:37:44.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:37:44.407: INFO: namespace svcaccounts-5333 deletion completed in 6.093163802s • [SLOW TEST:11.329 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:37:44.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 3 14:37:44.478: INFO: Waiting up to 5m0s for pod "downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f" in namespace "downward-api-1726" to be "success or failure" Jun 3 14:37:44.482: INFO: Pod 
"downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954913ms Jun 3 14:37:46.486: INFO: Pod "downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007963182s Jun 3 14:37:48.491: INFO: Pod "downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012211891s STEP: Saw pod success Jun 3 14:37:48.491: INFO: Pod "downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f" satisfied condition "success or failure" Jun 3 14:37:48.494: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f container dapi-container: STEP: delete the pod Jun 3 14:37:48.544: INFO: Waiting for pod downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f to disappear Jun 3 14:37:48.662: INFO: Pod downward-api-f8baa78f-d8a3-4b49-a8c4-07c6bfa64a6f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:37:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1726" for this suite. 
Jun 3 14:37:54.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:37:54.759: INFO: namespace downward-api-1726 deletion completed in 6.093158045s • [SLOW TEST:10.352 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:37:54.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0603 14:38:25.373883 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 3 14:38:25.373: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:38:25.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3524" for this suite. 
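[Editor's note] The deletion in the test above sets `deleteOptions.propagationPolicy: Orphan`, so the Deployment is removed while its ReplicaSet survives with its ownerReference stripped rather than being cascade-deleted. A sketch of the corresponding DeleteOptions body as sent to the API server (raw REST payload, stdlib only):

```python
import json

# DeleteOptions for an orphaning delete: the owner object goes away,
# dependents are orphaned instead of garbage-collected.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Orphan",
}
body = json.dumps(delete_options)
```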
Jun 3 14:38:31.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:38:31.639: INFO: namespace gc-3524 deletion completed in 6.262377726s • [SLOW TEST:36.880 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:38:31.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 3 14:38:31.676: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:38:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1625" for this suite. 
Jun 3 14:38:38.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:38:38.900: INFO: namespace custom-resource-definition-1625 deletion completed in 6.089718577s • [SLOW TEST:7.260 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 3 14:38:38.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-aae55f83-8e78-4368-acab-d3ff8cd8daed STEP: Creating a pod to test consume configMaps Jun 3 14:38:38.995: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868" in namespace "projected-9775" to be "success or failure" Jun 3 14:38:39.014: INFO: Pod 
"pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868": Phase="Pending", Reason="", readiness=false. Elapsed: 18.267484ms Jun 3 14:38:41.018: INFO: Pod "pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022893837s Jun 3 14:38:43.023: INFO: Pod "pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027288747s STEP: Saw pod success Jun 3 14:38:43.023: INFO: Pod "pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868" satisfied condition "success or failure" Jun 3 14:38:43.026: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868 container projected-configmap-volume-test: STEP: delete the pod Jun 3 14:38:43.088: INFO: Waiting for pod pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868 to disappear Jun 3 14:38:43.094: INFO: Pod pod-projected-configmaps-569875bb-4f67-49a5-8ba5-81995b619868 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 3 14:38:43.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9775" for this suite. 
Jun 3 14:38:49.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 3 14:38:49.182: INFO: namespace projected-9775 deletion completed in 6.084236967s • [SLOW TEST:10.282 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSJun 3 14:38:49.183: INFO: Running AfterSuite actions on all nodes Jun 3 14:38:49.183: INFO: Running AfterSuite actions on node 1 Jun 3 14:38:49.183: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6172.448 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS