I0629 12:55:57.221790 6 e2e.go:243] Starting e2e run "4854ca73-ad24-4e23-b955-7a339d8f45af" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593435356 - Will randomize all specs
Will run 215 of 4412 specs

Jun 29 12:55:57.408: INFO: >>> kubeConfig: /root/.kube/config
Jun 29 12:55:57.411: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 29 12:55:57.432: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 29 12:55:57.470: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 29 12:55:57.470: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 29 12:55:57.470: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 29 12:55:57.478: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 29 12:55:57.478: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 29 12:55:57.478: INFO: e2e test version: v1.15.11
Jun 29 12:55:57.479: INFO: kube-apiserver version: v1.15.7
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:55:57.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jun 29 12:55:57.534: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-7298aa7b-c8a4-4290-a5e4-6161efaeeb15
STEP: Creating a pod to test consume secrets
Jun 29 12:55:57.545: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015" in namespace "projected-2853" to be "success or failure"
Jun 29 12:55:57.547: INFO: Pod "pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015": Phase="Pending", Reason="", readiness=false. Elapsed: 1.556908ms
Jun 29 12:55:59.551: INFO: Pod "pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005822908s
Jun 29 12:56:01.555: INFO: Pod "pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009907824s
STEP: Saw pod success
Jun 29 12:56:01.555: INFO: Pod "pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015" satisfied condition "success or failure"
Jun 29 12:56:01.558: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015 container projected-secret-volume-test:
STEP: delete the pod
Jun 29 12:56:01.742: INFO: Waiting for pod pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015 to disappear
Jun 29 12:56:01.759: INFO: Pod pod-projected-secrets-799becd1-119b-4013-94d9-bb6ab2e06015 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:56:01.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2853" for this suite.
Jun 29 12:56:07.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:56:07.867: INFO: namespace projected-2853 deletion completed in 6.104440943s
• [SLOW TEST:10.389 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
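The spec above mounts a Secret into a pod through a projected volume and asserts the per-item file mode. A minimal sketch of the shape of such a pod spec in Go, using the k8s.io/api types; the names, image, and 0400 mode below are illustrative assumptions, not the test's actual fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // explicit per-item mode, the "Item Mode" the test checks
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                // Remap key "data-1" to a new path with the explicit mode.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                }},
            }},
        },
    }
    fmt.Println(pod.Name)
}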
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:56:07.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-27
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jun 29 12:56:08.010: INFO: Found 0 stateful pods, waiting for 3
Jun 29 12:56:18.165: INFO: Found 2 stateful pods, waiting for 3
Jun 29 12:56:28.321: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 29 12:56:28.321: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 29 12:56:28.321: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 29 12:56:28.344: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 29 12:56:38.906: INFO: Updating stateful set ss2
Jun 29 12:56:38.968: INFO: Waiting for Pod statefulset-27/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 29 12:56:48.978: INFO: Waiting for Pod statefulset-27/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jun 29 12:56:59.090: INFO: Found 2 stateful pods, waiting for 3
Jun 29 12:57:09.094: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 29 12:57:09.094: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 29 12:57:09.094: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 29 12:57:09.112: INFO: Updating stateful set ss2
Jun 29 12:57:09.159: INFO: Waiting for Pod statefulset-27/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 29 12:57:19.164: INFO: Waiting for Pod statefulset-27/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jun 29 12:57:29.184: INFO: Updating stateful set ss2
Jun 29 12:57:29.343: INFO: Waiting for StatefulSet statefulset-27/ss2 to complete update
Jun 29 12:57:29.343: INFO: Waiting for Pod statefulset-27/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 29 12:57:39.522: INFO: Deleting all statefulset in ns statefulset-27
Jun 29 12:57:39.525: INFO: Scaling statefulset ss2 to 0
Jun 29 12:58:09.668: INFO: Waiting for statefulset status.replicas updated to 0
Jun 29 12:58:09.671: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:58:09.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-27" for this suite.
Jun 29 12:58:19.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:58:19.858: INFO: namespace statefulset-27 deletion completed in 10.168719178s
• [SLOW TEST:131.990 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
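The canary and phased rollout above are driven by the RollingUpdate strategy's partition: pods with ordinal >= partition get the new template, pods below it are held back. A minimal sketch of that knob with the apps/v1 types (the replica count of 3 matches the ss2 set above; the loop values are illustrative):

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
)

// canaryPartition builds an update strategy that holds back pods with
// ordinal < partition; partition == replicas applies the update to no one.
func canaryPartition(partition int32) appsv1.StatefulSetUpdateStrategy {
    return appsv1.StatefulSetUpdateStrategy{
        Type: appsv1.RollingUpdateStatefulSetStrategyType,
        RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
            Partition: &partition,
        },
    }
}

func main() {
    // Phased rollout: start with the partition past the last ordinal
    // (update nobody), canary the highest pod, then lower it to zero.
    for _, p := range []int32{3, 2, 1, 0} {
        s := canaryPartition(p)
        fmt.Printf("partition=%d type=%s\n", *s.RollingUpdate.Partition, s.Type)
    }
}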
SSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:58:19.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ab7860cc-4245-4936-b9ea-d3069d319ceb
STEP: Creating a pod to test consume configMaps
Jun 29 12:58:20.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe" in namespace "configmap-7631" to be "success or failure"
Jun 29 12:58:20.187: INFO: Pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe": Phase="Pending", Reason="", readiness=false. Elapsed: 15.595464ms
Jun 29 12:58:22.191: INFO: Pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019305474s
Jun 29 12:58:24.278: INFO: Pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe": Phase="Running", Reason="", readiness=true. Elapsed: 4.10630462s
Jun 29 12:58:26.282: INFO: Pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110492508s
STEP: Saw pod success
Jun 29 12:58:26.282: INFO: Pod "pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe" satisfied condition "success or failure"
Jun 29 12:58:26.285: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe container configmap-volume-test:
STEP: delete the pod
Jun 29 12:58:26.400: INFO: Waiting for pod pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe to disappear
Jun 29 12:58:26.499: INFO: Pod pod-configmaps-ac6855cc-459f-4926-9e10-85b682a8edfe no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:58:26.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7631" for this suite.
Jun 29 12:58:32.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:58:32.647: INFO: namespace configmap-7631 deletion completed in 6.145013607s
• [SLOW TEST:12.789 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:58:32.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 29 12:58:32.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0" in namespace "downward-api-9434" to be "success or failure"
Jun 29 12:58:32.777: INFO: Pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680591ms
Jun 29 12:58:34.927: INFO: Pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153021733s
Jun 29 12:58:36.931: INFO: Pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156443243s
Jun 29 12:58:38.935: INFO: Pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160562373s
STEP: Saw pod success
Jun 29 12:58:38.935: INFO: Pod "downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0" satisfied condition "success or failure"
Jun 29 12:58:38.938: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0 container client-container:
STEP: delete the pod
Jun 29 12:58:38.995: INFO: Waiting for pod downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0 to disappear
Jun 29 12:58:39.011: INFO: Pod downwardapi-volume-7ea43e95-a8b0-49fe-912b-d583d6cacff0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:58:39.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9434" for this suite.
Jun 29 12:58:45.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:58:45.318: INFO: namespace downward-api-9434 deletion completed in 6.304322715s
• [SLOW TEST:12.670 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
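The downward API spec above exposes the container's CPU limit as a file in a volume. A minimal sketch of the volume definition that drives it, using a resourceFieldRef; the container name, path, and "1m" divisor are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // A downward API volume item that writes the container's CPU limit
    // into the file "cpu_limit" inside the mounted volume.
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                        Divisor:       resource.MustParse("1m"), // report in millicores
                    },
                }},
            },
        },
    }
    fmt.Println(vol.Name)
}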
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:58:45.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 29 12:58:51.344: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:58:51.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1576" for this suite.
Jun 29 12:58:57.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:58:57.853: INFO: namespace container-runtime-1576 deletion completed in 6.428767592s
• [SLOW TEST:12.535 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
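The assertion above (an empty termination message on success) follows from the TerminationMessagePolicy field: with FallbackToLogsOnError, container logs are used as the termination message only when the container fails. A minimal sketch of the container field involved; the name, image, and command are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "termination-message-container",
        Image:   "busybox",
        Command: []string{"/bin/sh", "-c", "exit 0"}, // succeeds, writes no message
        // On success the termination-log file stays empty and no log
        // fallback happens, so the reported message is empty.
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
    fmt.Println(c.TerminationMessagePolicy)
}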
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:58:57.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jun 29 12:58:57.922: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix174224435/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:58:57.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7561" for this suite.
Jun 29 12:59:04.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:59:04.092: INFO: namespace kubectl-7561 deletion completed in 6.102062862s
• [SLOW TEST:6.239 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
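The "retrieving proxy /api/ output" step above talks HTTP to the proxy over the unix socket rather than a TCP port. A minimal sketch of such a client in Go; the socket path is an assumption for illustration, and kubectl proxy --unix-socket must already be running against it:

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    sock := "/tmp/kubectl-proxy.sock" // placeholder socket path
    client := &http.Client{
        Transport: &http.Transport{
            // Route every request through the unix socket.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                var d net.Dialer
                return d.DialContext(ctx, "unix", sock)
            },
        },
    }
    // The host part is ignored by the dialer but required by net/http.
    resp, err := client.Get("http://localhost/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}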
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:59:04.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jun 29 12:59:04.147: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:59:04.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-567" for this suite.
Jun 29 12:59:10.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:59:10.338: INFO: namespace kubectl-567 deletion completed in 6.097746974s
• [SLOW TEST:6.245 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:59:10.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 29 12:59:10.439: INFO: Creating deployment "test-recreate-deployment"
Jun 29 12:59:10.464: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jun 29 12:59:10.521: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jun 29 12:59:12.605: INFO: Waiting deployment "test-recreate-deployment" to complete
Jun 29 12:59:12.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032350, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032350, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 29 12:59:14.612: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jun 29 12:59:14.620: INFO: Updating deployment test-recreate-deployment
Jun 29 12:59:14.620: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 29 12:59:14.886: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-971,SelfLink:/apis/apps/v1/namespaces/deployment-971/deployments/test-recreate-deployment,UID:53a1ef44-7347-47af-867b-76c790b772ee,ResourceVersion:19102640,Generation:2,CreationTimestamp:2020-06-29 12:59:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-29 12:59:14 +0000 UTC 2020-06-29 12:59:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-29 12:59:14 +0000 UTC 2020-06-29 12:59:10 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Jun 29 12:59:14.906: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-971,SelfLink:/apis/apps/v1/namespaces/deployment-971/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5b121613-2dce-4a13-8db5-5b3c05f62479,ResourceVersion:19102638,Generation:1,CreationTimestamp:2020-06-29 12:59:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 53a1ef44-7347-47af-867b-76c790b772ee 0xc002a2cd27 0xc002a2cd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 29 12:59:14.906: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jun 29 12:59:14.906: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-971,SelfLink:/apis/apps/v1/namespaces/deployment-971/replicasets/test-recreate-deployment-6df85df6b9,UID:22125f3b-4768-4b5f-ab6f-4ddc81c06e8b,ResourceVersion:19102628,Generation:2,CreationTimestamp:2020-06-29 12:59:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 53a1ef44-7347-47af-867b-76c790b772ee 0xc002a2cdf7 0xc002a2cdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 29 12:59:14.909: INFO: Pod "test-recreate-deployment-5c8c9cc69d-6zr8k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-6zr8k,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-971,SelfLink:/api/v1/namespaces/deployment-971/pods/test-recreate-deployment-5c8c9cc69d-6zr8k,UID:cde38944-35fb-46fc-8de7-8305e2859e8f,ResourceVersion:19102641,Generation:0,CreationTimestamp:2020-06-29 12:59:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5b121613-2dce-4a13-8db5-5b3c05f62479 0xc0025d33c7 0xc0025d33c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jppxb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jppxb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jppxb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025d3440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025d3460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 12:59:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 12:59:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 12:59:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 12:59:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 12:59:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:59:14.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-971" for this suite.
Jun 29 12:59:24.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 12:59:25.008: INFO: namespace deployment-971 deletion completed in 10.095273223s
• [SLOW TEST:14.670 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
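The dump above shows Strategy{Type:Recreate,RollingUpdate:nil}: with the Recreate strategy the controller scales the old ReplicaSet to zero before the new one starts, so old and new pods never overlap, which is exactly what this spec watches for. A minimal sketch of the field involved, using the apps/v1 types:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
)

func main() {
    // No RollingUpdate block is set for Recreate; old pods are deleted
    // first, then the new template's pods are created.
    strategy := appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType}
    fmt.Println(strategy.Type) // prints "Recreate"
}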
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 12:59:25.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4377
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4377
STEP: Deleting pre-stop pod
Jun 29 12:59:42.668: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 12:59:42.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4377" for this suite.
Jun 29 13:00:20.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:00:20.919: INFO: namespace prestop-4377 deletion completed in 38.13741373s
• [SLOW TEST:55.911 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:00:20.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 29 13:00:25.705: INFO: Successfully updated pod "annotationupdateace9d436-0ac1-45bd-80e3-7f12b2f8a306"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:00:27.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3610" for this suite.
Jun 29 13:00:49.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:00:49.905: INFO: namespace downward-api-3610 deletion completed in 22.1172607s
• [SLOW TEST:28.986 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:00:49.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jun 29 13:00:55.975: INFO: Pod pod-hostip-68db5913-ab7a-437b-bebe-ca6eeaf9d402 has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:00:55.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5536" for this suite.
Jun 29 13:01:17.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:01:18.064: INFO: namespace pods-5536 deletion completed in 22.08563586s
• [SLOW TEST:28.159 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
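The hostIP asserted above is just status.hostIP on the scheduled pod. A minimal sketch of reading it with client-go, using the context-less Get signature of the v1.15-era client libraries this suite matches (newer client-go releases add a context argument); the namespace and pod name are placeholders:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from the same kubeconfig the suite uses.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset := kubernetes.NewForConfigOrDie(config)

    // Fetch the pod and print the node IP it landed on.
    pod, err := clientset.CoreV1().Pods("default").Get("pod-hostip-example", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("hostIP:", pod.Status.HostIP)
}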
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:01:18.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 29 13:01:18.201: INFO: Waiting up to 5m0s for pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c" in namespace "emptydir-9977" to be "success or failure"
Jun 29 13:01:18.223: INFO: Pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.607081ms
Jun 29 13:01:20.540: INFO: Pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338740981s
Jun 29 13:01:22.544: INFO: Pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342764112s
Jun 29 13:01:24.548: INFO: Pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.346490107s
STEP: Saw pod success
Jun 29 13:01:24.548: INFO: Pod "pod-6fc45277-0c90-417d-aff9-52818eb7662c" satisfied condition "success or failure"
Jun 29 13:01:24.550: INFO: Trying to get logs from node iruya-worker pod pod-6fc45277-0c90-417d-aff9-52818eb7662c container test-container:
STEP: delete the pod
Jun 29 13:01:24.600: INFO: Waiting for pod pod-6fc45277-0c90-417d-aff9-52818eb7662c to disappear
Jun 29 13:01:24.683: INFO: Pod pod-6fc45277-0c90-417d-aff9-52818eb7662c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:01:24.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9977" for this suite.
Jun 29 13:01:30.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:01:30.827: INFO: namespace emptydir-9977 deletion completed in 6.139839224s
• [SLOW TEST:12.763 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
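The "(non-root,0666,tmpfs)" triple above means: run as a non-root user, expect 0666 file permissions, and back the emptyDir with memory. A minimal sketch of that pod shape; the UID, image, and shell command stand in for the suite's mounttest fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    nonRoot := int64(1000) // arbitrary non-root UID for illustration
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Write a file with mode 0666 and show it back.
                Command:      []string{"sh", "-c", "umask 0; touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    fmt.Println(pod.Name)
}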
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:01:30.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Jun 29 13:01:45.373: INFO: 5 pods remaining
Jun 29 13:01:45.373: INFO: 5 pods has nil DeletionTimestamp
Jun 29 13:01:45.373: INFO:
STEP: Gathering metrics
W0629 13:01:49.642617 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 29 13:01:49.642: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:01:49.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3230" for this suite.
Jun 29 13:02:02.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:02:02.318: INFO: namespace gc-3230 deletion completed in 12.672678308s
• [SLOW TEST:31.490 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
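The garbage-collector behavior checked above hinges on pods carrying two ownerReferences: foreground deletion of one owner must not remove a dependent that another live owner still claims. A minimal sketch of the metadata shapes involved, using the metav1 types; the names and UIDs are placeholders:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A pod owned by both ReplicationControllers, as the test sets up for
    // half of the pods.
    blockDeletion := true
    owners := []metav1.OwnerReference{
        {APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-be-deleted", UID: "uid-1", BlockOwnerDeletion: &blockDeletion},
        {APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-stay", UID: "uid-2"},
    }

    // Foreground propagation makes the delete wait for dependents, which
    // is the "owner that's waiting for dependents" in the spec name.
    policy := metav1.DeletePropagationForeground
    opts := &metav1.DeleteOptions{PropagationPolicy: &policy}

    fmt.Println(len(owners), *opts.PropagationPolicy)
}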
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:02:02.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jun 29 13:02:02.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1432'
Jun 29 13:02:06.530: INFO: stderr: ""
Jun 29 13:02:06.530: INFO: stdout: "pod/pause created\n"
Jun 29 13:02:06.530: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 29 13:02:06.530: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1432" to be "running and ready"
Jun 29 13:02:06.536: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051403ms
Jun 29 13:02:08.539: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009259853s
Jun 29 13:02:10.544: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013711625s
Jun 29 13:02:10.544: INFO: Pod "pause" satisfied condition "running and ready"
Jun 29 13:02:10.544: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 29 13:02:10.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1432'
Jun 29 13:02:10.653: INFO: stderr: ""
Jun 29 13:02:10.653: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 29 13:02:10.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1432'
Jun 29 13:02:10.757: INFO: stderr: ""
Jun 29 13:02:10.757: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 29 13:02:10.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1432'
Jun 29 13:02:10.857: INFO: stderr: ""
Jun 29 13:02:10.857: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 29 13:02:10.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1432'
Jun 29 13:02:10.965: INFO: stderr: ""
Jun 29 13:02:10.965: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jun 29 13:02:10.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1432'
Jun 29 13:02:11.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 29 13:02:11.106: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 29 13:02:11.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1432'
Jun 29 13:02:11.215: INFO: stderr: "No resources found.\n"
Jun 29 13:02:11.215: INFO: stdout: ""
Jun 29 13:02:11.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1432 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 29 13:02:11.310: INFO: stderr: ""
Jun 29 13:02:11.310: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:02:11.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1432" for this suite.
Jun 29 13:02:17.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:02:17.456: INFO: namespace kubectl-1432 deletion completed in 6.143050969s
• [SLOW TEST:15.139 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 13:02:17.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8e6f0895-0aee-47b7-a462-f63ad67e2b9e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8e6f0895-0aee-47b7-a462-f63ad67e2b9e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 13:02:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6354" for this suite.
Jun 29 13:02:45.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 13:02:45.752: INFO: namespace configmap-6354 deletion completed in 22.080576066s
• [SLOW TEST:28.295 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jun 29 13:02:45.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:02:45.752: INFO: namespace configmap-6354 deletion completed in 22.080576066s • [SLOW TEST:28.295 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:02:45.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:03:10.044: INFO: Container started at 2020-06-29 13:02:49 +0000 UTC, pod became ready at 2020-06-29 13:03:09 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:03:10.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8583" for this suite. 
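The point of the probe spec above is that readiness gating and restarts are independent: the container starts immediately, is only marked Ready once the initial delay has elapsed and the probe succeeds, and a readiness probe (unlike a liveness probe) never restarts the container. A sketch of such a pod, with illustrative values:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
# READY stays 0/1 for roughly the initial delay; RESTARTS should remain 0 throughout
kubectl get pod readiness-demo -w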
Jun 29 13:03:32.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:03:32.139: INFO: namespace container-probe-8583 deletion completed in 22.091601097s • [SLOW TEST:46.387 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:03:32.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-0e4f022c-9d65-4e8d-89e7-e6f0c7d748e8 STEP: Creating a pod to test consume secrets Jun 29 13:03:32.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f" in namespace "projected-8831" to be "success or failure" Jun 29 13:03:32.275: INFO: Pod "pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739563ms Jun 29 13:03:34.280: INFO: Pod "pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098259s Jun 29 13:03:36.283: INFO: Pod "pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011606286s STEP: Saw pod success Jun 29 13:03:36.283: INFO: Pod "pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f" satisfied condition "success or failure" Jun 29 13:03:36.285: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f container projected-secret-volume-test: STEP: delete the pod Jun 29 13:03:36.460: INFO: Waiting for pod pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f to disappear Jun 29 13:03:36.631: INFO: Pod pod-projected-secrets-d63ec697-2e2b-4c12-ae3f-2daf265aae2f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:03:36.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8831" for this suite. 
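This spec, like the Secrets and projected-secret defaultMode siblings that follow it, checks that files on secret-backed volumes honor defaultMode for permissions and the pod-level fsGroup for group ownership when the container runs as a non-root user. A minimal sketch with illustrative names and IDs:

kubectl create secret generic demo-secret --from-literal=username=admin
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/username"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # expect mode -r--r----- with group 2000 on the file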
Jun 29 13:03:42.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:03:42.766: INFO: namespace projected-8831 deletion completed in 6.129900513s • [SLOW TEST:10.626 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:03:42.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-470a625d-30d1-4a52-8f8d-681049bcad93 STEP: Creating a pod to test consume secrets Jun 29 13:03:42.915: INFO: Waiting up to 5m0s for pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d" in namespace "secrets-3918" to be "success or failure" Jun 29 13:03:42.929: INFO: Pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.61969ms Jun 29 13:03:44.933: INFO: Pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017910059s Jun 29 13:03:46.937: INFO: Pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d": Phase="Running", Reason="", readiness=true. Elapsed: 4.021878061s Jun 29 13:03:48.942: INFO: Pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026210431s STEP: Saw pod success Jun 29 13:03:48.942: INFO: Pod "pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d" satisfied condition "success or failure" Jun 29 13:03:48.945: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d container secret-volume-test: STEP: delete the pod Jun 29 13:03:48.965: INFO: Waiting for pod pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d to disappear Jun 29 13:03:48.986: INFO: Pod pod-secrets-48a83f69-b4cc-4600-b69d-a6c17f16f23d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:03:48.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3918" for this suite. 
Jun 29 13:03:55.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:03:55.095: INFO: namespace secrets-3918 deletion completed in 6.106293095s • [SLOW TEST:12.328 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:03:55.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-31311322-4b54-421a-b06e-a534379c446e STEP: Creating a pod to test consume secrets Jun 29 13:03:55.163: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9" in namespace "projected-5713" to be "success or failure" Jun 29 13:03:55.180: INFO: Pod "pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.741405ms Jun 29 13:03:57.279: INFO: Pod "pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115449616s Jun 29 13:03:59.283: INFO: Pod "pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119551246s STEP: Saw pod success Jun 29 13:03:59.283: INFO: Pod "pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9" satisfied condition "success or failure" Jun 29 13:03:59.286: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9 container projected-secret-volume-test: STEP: delete the pod Jun 29 13:03:59.306: INFO: Waiting for pod pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9 to disappear Jun 29 13:03:59.544: INFO: Pod pod-projected-secrets-c61d8f50-3655-4aec-90e9-44b0da6117d9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:03:59.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5713" for this suite. 
Jun 29 13:04:05.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:04:05.687: INFO: namespace projected-5713 deletion completed in 6.137167249s • [SLOW TEST:10.590 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:04:05.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 29 13:04:13.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:13.840: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:15.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:15.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:17.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:17.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:19.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:19.845: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:21.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:21.845: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:23.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:23.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:25.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:25.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:27.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:27.843: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:29.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:29.843: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:31.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:31.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:33.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:33.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 
13:04:35.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:35.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:37.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:37.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:39.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:39.845: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:41.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:41.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 29 13:04:43.840: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 29 13:04:43.843: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:04:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7271" for this suite. Jun 29 13:05:05.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:05:05.950: INFO: namespace container-lifecycle-hook-7271 deletion completed in 22.103444223s • [SLOW TEST:60.262 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:05:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 29 13:05:06.027: INFO: PodSpec: initContainers in spec.initContainers Jun 29 13:05:55.360: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b83939b2-d8d7-4c56-8258-762138f124a5", GenerateName:"", Namespace:"init-container-1483", SelfLink:"/api/v1/namespaces/init-container-1483/pods/pod-init-b83939b2-d8d7-4c56-8258-762138f124a5", UID:"8ec60c77-5d7e-428c-a503-1e5f1d8b3114", ResourceVersion:"19103965", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729032706, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"name":"foo", "time":"27119789"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mqtmp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023c9240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mqtmp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mqtmp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mqtmp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001451018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ac0f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014510a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014510c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0014510c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0014510cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready 
status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729032706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.185", StartTime:(*v1.Time)(0xc0010d4fa0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0010d5040), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024fc310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://84fdffc4a79a6fcb7a40335b2e3c144c8a6a74c53389d4d4891610404c0e5e3e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010d5060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010d4fe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:05:55.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1483" for this suite. 
Jun 29 13:06:17.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:06:17.818: INFO: namespace init-container-1483 deletion completed in 22.243992873s • [SLOW TEST:71.869 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:06:17.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 29 13:06:17.902: INFO: Waiting up to 5m0s for pod "pod-22457008-33ab-4df5-a964-774df120f6db" in namespace "emptydir-4465" to be "success or failure" Jun 29 13:06:17.950: INFO: Pod "pod-22457008-33ab-4df5-a964-774df120f6db": Phase="Pending", Reason="", readiness=false. Elapsed: 48.167076ms Jun 29 13:06:19.954: INFO: Pod "pod-22457008-33ab-4df5-a964-774df120f6db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052139459s Jun 29 13:06:21.959: INFO: Pod "pod-22457008-33ab-4df5-a964-774df120f6db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056630028s STEP: Saw pod success Jun 29 13:06:21.959: INFO: Pod "pod-22457008-33ab-4df5-a964-774df120f6db" satisfied condition "success or failure" Jun 29 13:06:21.962: INFO: Trying to get logs from node iruya-worker pod pod-22457008-33ab-4df5-a964-774df120f6db container test-container: STEP: delete the pod Jun 29 13:06:21.980: INFO: Waiting for pod pod-22457008-33ab-4df5-a964-774df120f6db to disappear Jun 29 13:06:21.985: INFO: Pod pod-22457008-33ab-4df5-a964-774df120f6db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:06:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4465" for this suite. 
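The (root,0666,tmpfs) triple in the spec name encodes the user the test runs as, the file mode it expects, and the emptyDir medium; the (root,0777,default) variant later in the run differs only in mode and medium. A sketch of the tmpfs case, with illustrative paths:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /ed; umask 0; echo hi > /ed/file; ls -l /ed/file"]
    volumeMounts:
    - name: scratch
      mountPath: /ed
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # Memory backs the volume with tmpfs; omit for the node's default medium
EOF
kubectl logs emptydir-tmpfs-demo   # expect a tmpfs mount line and -rw-rw-rw- on /ed/file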
Jun 29 13:06:28.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:06:28.109: INFO: namespace emptydir-4465 deletion completed in 6.121208574s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:06:28.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:06:28.193: INFO: Creating ReplicaSet my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4 Jun 29 13:06:28.242: INFO: Pod name my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4: Found 0 pods out of 1 Jun 29 13:06:33.248: INFO: Pod name my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4: Found 1 pods out of 1 Jun 29 13:06:33.248: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4" is running Jun 29 13:06:33.251: INFO: Pod "my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4-995gr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:06:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:06:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:06:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:06:28 +0000 UTC Reason: Message:}]) Jun 29 13:06:33.251: INFO: Trying to dial the pod Jun 29 13:06:38.265: INFO: Controller my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4: Got expected result from replica 1 [my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4-995gr]: "my-hostname-basic-c2e83ecd-be55-40ce-a775-ec70c6edfea4-995gr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:06:38.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-93" for this suite. 
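The ReplicaSet spec above is the classic serve-hostname check: every replica runs a tiny HTTP server that answers with its own pod name, so dialing each replica proves each pod serves independently (the ReplicationController spec later in the run is the same check with a different controller kind). A sketch; the image is assumed to be the e2e serve-hostname image, and the name and replica count are illustrative:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l name=my-hostname-basic -o wide
# from inside the cluster, wget -qO- http://<pod-ip>:9376 returns the serving pod's name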
Jun 29 13:06:44.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:06:44.430: INFO: namespace replicaset-93 deletion completed in 6.160634915s • [SLOW TEST:16.320 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:06:44.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 13:06:44.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6194' Jun 29 13:06:44.644: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 29 13:06:44.644: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 29 13:06:44.670: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 29 13:06:44.691: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 29 13:06:44.767: INFO: scanned /root for discovery docs: Jun 29 13:06:44.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6194' Jun 29 13:07:00.711: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 29 13:07:00.711: INFO: stdout: "Created e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6\nScaling up e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 29 13:07:00.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6194' Jun 29 13:07:00.881: INFO: stderr: "" Jun 29 13:07:00.881: INFO: stdout: "e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6-9dcgn e2e-test-nginx-rc-lc7sn " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jun 29 13:07:05.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6194' Jun 29 13:07:05.986: INFO: stderr: "" Jun 29 13:07:05.986: INFO: stdout: "e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6-9dcgn " Jun 29 13:07:05.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6-9dcgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6194' Jun 29 13:07:06.078: INFO: stderr: "" Jun 29 13:07:06.078: INFO: stdout: "true" Jun 29 13:07:06.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6-9dcgn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6194' Jun 29 13:07:06.177: INFO: stderr: "" Jun 29 13:07:06.177: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 29 13:07:06.177: INFO: e2e-test-nginx-rc-aee3e37d63aa852d3e53c4852fb767e6-9dcgn is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jun 29 13:07:06.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6194' Jun 29 13:07:06.285: INFO: stderr: "" Jun 29 13:07:06.285: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:07:06.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6194" for this suite. 
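For reference, the two commands that drive the spec above, taken from the log with the kubeconfig flag dropped; both the run/v1 generator and rolling-update itself are deprecated here and were later removed in favor of Deployments and kubectl rollout:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# rolling-update clones the RC, scales the clone up while scaling the original down,
# then deletes the old controller and renames the clone back to the original name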
Jun 29 13:07:26.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:07:26.400: INFO: namespace kubectl-6194 deletion completed in 20.112542605s • [SLOW TEST:41.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:07:26.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 29 13:07:26.527: INFO: Waiting up to 5m0s for pod "pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd" in namespace "emptydir-2308" to be "success or failure" Jun 29 13:07:26.538: INFO: Pod "pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.47367ms Jun 29 13:07:28.542: INFO: Pod "pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015247057s Jun 29 13:07:30.546: INFO: Pod "pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018856029s STEP: Saw pod success Jun 29 13:07:30.546: INFO: Pod "pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd" satisfied condition "success or failure" Jun 29 13:07:30.548: INFO: Trying to get logs from node iruya-worker pod pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd container test-container: STEP: delete the pod Jun 29 13:07:30.568: INFO: Waiting for pod pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd to disappear Jun 29 13:07:30.573: INFO: Pod pod-1ccf1a10-04dd-4f29-aebd-730e78549fdd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:07:30.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2308" for this suite. 
Jun 29 13:07:36.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:07:36.690: INFO: namespace emptydir-2308 deletion completed in 6.114959359s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:07:36.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 13:07:36.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7690' Jun 29 13:07:36.846: INFO: stderr: "" Jun 29 13:07:36.846: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 29 13:07:41.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7690 -o json' Jun 29 13:07:41.989: INFO: stderr: "" Jun 29 13:07:41.989: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-29T13:07:36Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7690\",\n \"resourceVersion\": \"19104353\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7690/pods/e2e-test-nginx-pod\",\n \"uid\": \"d0951d45-e57f-4a48-8685-74b7185908bb\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-txnvt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 
30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-txnvt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-txnvt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-29T13:07:36Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-29T13:07:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-29T13:07:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-29T13:07:36Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d1a9f0f591ab61632489d16c05819f30354fa2a2dde5b24b1bc458057b6eb739\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-29T13:07:39Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.189\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-29T13:07:36Z\"\n }\n}\n" STEP: replace the image in the pod Jun 29 13:07:41.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7690' Jun 29 13:07:42.299: INFO: stderr: "" Jun 29 13:07:42.299: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 29 13:07:42.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7690' Jun 29 13:07:46.613: INFO: stderr: "" Jun 29 13:07:46.613: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:07:46.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7690" for this suite. 
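The replace flow above is get, edit, resubmit: export the live pod, change spec.containers[0].image (one of the few pod fields that is mutable in place), and feed it back through kubectl replace. A sketch using the names from the log:

kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o yaml > pod.yaml
# edit spec.containers[0].image in pod.yaml to docker.io/library/busybox:1.29, then:
kubectl replace -f pod.yaml
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'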
Jun 29 13:07:52.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:07:52.811: INFO: namespace kubectl-7690 deletion completed in 6.194844823s • [SLOW TEST:16.120 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:07:52.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1850/configmap-test-f58727e3-d925-4b3d-b4c0-e6733fd72828 STEP: Creating a pod to test consume configMaps Jun 29 13:07:52.894: INFO: Waiting up to 5m0s for pod "pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6" in namespace "configmap-1850" to be "success or failure" Jun 29 13:07:52.934: INFO: Pod "pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6": Phase="Pending", Reason="", readiness=false. Elapsed: 40.234956ms Jun 29 13:07:55.030: INFO: Pod "pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136769386s Jun 29 13:07:57.034: INFO: Pod "pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140690991s STEP: Saw pod success Jun 29 13:07:57.034: INFO: Pod "pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6" satisfied condition "success or failure" Jun 29 13:07:57.038: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6 container env-test: STEP: delete the pod Jun 29 13:07:57.086: INFO: Waiting for pod pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6 to disappear Jun 29 13:07:57.167: INFO: Pod pod-configmaps-581531a1-28ae-412c-9795-14b7ac19ced6 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:07:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1850" for this suite. 
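Consuming a ConfigMap through the environment, as this spec does, is a single valueFrom stanza on the container. A minimal sketch with illustrative names:

kubectl create configmap config-test --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: env-test-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: config-test
          key: data-1
EOF
kubectl logs env-test-demo   # expect CONFIG_DATA_1=value-1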
Jun 29 13:08:03.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:08:03.299: INFO: namespace configmap-1850 deletion completed in 6.128432874s • [SLOW TEST:10.488 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:08:03.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8 Jun 29 13:08:03.418: INFO: Pod name my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8: Found 0 pods out of 1 Jun 29 13:08:08.423: INFO: Pod name my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8: Found 1 pods out of 1 Jun 29 13:08:08.423: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8" are running Jun 29 13:08:08.426: INFO: Pod "my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8-2hww2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:08:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:08:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:08:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-29 13:08:03 +0000 UTC Reason: Message:}]) Jun 29 13:08:08.426: INFO: Trying to dial the pod Jun 29 13:08:13.438: INFO: Controller my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8: Got expected result from replica 1 [my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8-2hww2]: "my-hostname-basic-68945f16-72ea-4e84-87f2-b21752205ea8-2hww2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:08:13.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7628" for this suite. 
Jun 29 13:08:19.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:08:19.576: INFO: namespace replication-controller-7628 deletion completed in 6.133980824s • [SLOW TEST:16.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:08:19.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:08:19.660: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 29 13:08:21.797: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:08:22.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2900" for this suite. 
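The quota interplay above is worth spelling out: with a hard cap of two pods in the namespace, an RC asking for three cannot create its third pod, so the controller surfaces a ReplicaFailure condition (reason FailedCreate) on the RC's status, and the condition clears once the RC is scaled back within quota. A sketch with illustrative names:

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get rc condition-test -o jsonpath='{.status.conditions}'   # expect a ReplicaFailure entry
kubectl scale rc condition-test --replicas=2   # back within quota; the condition clears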
Jun 29 13:08:29.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:08:29.311: INFO: namespace replication-controller-2900 deletion completed in 6.415553212s • [SLOW TEST:9.733 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:08:29.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 29 13:08:29.398: INFO: namespace kubectl-5179 Jun 29 13:08:29.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5179' Jun 29 13:08:29.667: INFO: stderr: "" Jun 29 13:08:29.667: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 29 13:08:30.671: INFO: Selector matched 1 pods for map[app:redis] Jun 29 13:08:30.671: INFO: Found 0 / 1 Jun 29 13:08:31.713: INFO: Selector matched 1 pods for map[app:redis] Jun 29 13:08:31.713: INFO: Found 0 / 1 Jun 29 13:08:32.675: INFO: Selector matched 1 pods for map[app:redis] Jun 29 13:08:32.675: INFO: Found 0 / 1 Jun 29 13:08:33.672: INFO: Selector matched 1 pods for map[app:redis] Jun 29 13:08:33.672: INFO: Found 1 / 1 Jun 29 13:08:33.672: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 29 13:08:33.676: INFO: Selector matched 1 pods for map[app:redis] Jun 29 13:08:33.676: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 29 13:08:33.676: INFO: wait on redis-master startup in kubectl-5179 Jun 29 13:08:33.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t49gl redis-master --namespace=kubectl-5179' Jun 29 13:08:33.786: INFO: stderr: "" Jun 29 13:08:33.786: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Jun 13:08:32.689 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jun 13:08:32.693 # Server started, Redis version 3.2.12\n1:M 29 Jun 13:08:32.693 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jun 13:08:32.693 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 29 13:08:33.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5179' Jun 29 13:08:33.925: INFO: stderr: "" Jun 29 13:08:33.926: INFO: stdout: "service/rm2 exposed\n" Jun 29 13:08:33.946: INFO: Service rm2 in namespace kubectl-5179 found. STEP: exposing service Jun 29 13:08:35.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5179' Jun 29 13:08:36.099: INFO: stderr: "" Jun 29 13:08:36.099: INFO: stdout: "service/rm3 exposed\n" Jun 29 13:08:36.102: INFO: Service rm3 in namespace kubectl-5179 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:08:38.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5179" for this suite. 
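Stripped of the harness's --kubeconfig and --namespace plumbing, the expose sequence just logged reduces to three stock commands (names and ports exactly as in the log):
$ kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
$ kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
$ kubectl get svc rm2 rm3   # both services should route to the redis-master pod on 6379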
Jun 29 13:09:00.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:09:00.272: INFO: namespace kubectl-5179 deletion completed in 22.158154502s • [SLOW TEST:30.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:09:00.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 29 13:09:04.895: INFO: Successfully updated pod "pod-update-4e47296d-7d99-4092-b375-8bba4d0c33ab" STEP: verifying the updated pod is in kubernetes Jun 29 13:09:04.908: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:09:04.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2887" for this suite. 
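The pod update in this test happens through the client-go API, so the log only records the outcome. A hand-run equivalent might look like the following; the label key and value are hypothetical, not necessarily the field the test mutates:
$ kubectl label pod pod-update-4e47296d-7d99-4092-b375-8bba4d0c33ab time=updated --overwrite   # hypothetical key/value
$ kubectl get pod pod-update-4e47296d-7d99-4092-b375-8bba4d0c33ab --show-labels                # confirm the update landed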
Jun 29 13:09:26.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:09:26.995: INFO: namespace pods-2887 deletion completed in 22.083544058s • [SLOW TEST:26.722 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:09:26.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 29 13:09:27.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1036' Jun 29 13:09:27.372: INFO: stderr: "" Jun 29 13:09:27.372: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 29 13:09:27.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:27.515: INFO: stderr: "" Jun 29 13:09:27.515: INFO: stdout: "update-demo-nautilus-gw5bw update-demo-nautilus-z4wfn " Jun 29 13:09:27.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw5bw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:27.592: INFO: stderr: "" Jun 29 13:09:27.592: INFO: stdout: "" Jun 29 13:09:27.592: INFO: update-demo-nautilus-gw5bw is created but not running Jun 29 13:09:32.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:32.685: INFO: stderr: "" Jun 29 13:09:32.685: INFO: stdout: "update-demo-nautilus-gw5bw update-demo-nautilus-z4wfn " Jun 29 13:09:32.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw5bw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:32.779: INFO: stderr: "" Jun 29 13:09:32.779: INFO: stdout: "true" Jun 29 13:09:32.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gw5bw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:32.878: INFO: stderr: "" Jun 29 13:09:32.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:09:32.878: INFO: validating pod update-demo-nautilus-gw5bw Jun 29 13:09:32.909: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:09:32.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:09:32.909: INFO: update-demo-nautilus-gw5bw is verified up and running Jun 29 13:09:32.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:33.012: INFO: stderr: "" Jun 29 13:09:33.012: INFO: stdout: "true" Jun 29 13:09:33.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:33.110: INFO: stderr: "" Jun 29 13:09:33.110: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:09:33.110: INFO: validating pod update-demo-nautilus-z4wfn Jun 29 13:09:33.126: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:09:33.126: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:09:33.126: INFO: update-demo-nautilus-z4wfn is verified up and running STEP: scaling down the replication controller Jun 29 13:09:33.128: INFO: scanned /root for discovery docs: Jun 29 13:09:33.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1036' Jun 29 13:09:34.247: INFO: stderr: "" Jun 29 13:09:34.247: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 29 13:09:34.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:34.345: INFO: stderr: "" Jun 29 13:09:34.345: INFO: stdout: "update-demo-nautilus-gw5bw update-demo-nautilus-z4wfn " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 29 13:09:39.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:39.442: INFO: stderr: "" Jun 29 13:09:39.442: INFO: stdout: "update-demo-nautilus-gw5bw update-demo-nautilus-z4wfn " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 29 13:09:44.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:44.544: INFO: stderr: "" Jun 29 13:09:44.544: INFO: stdout: "update-demo-nautilus-z4wfn " Jun 29 13:09:44.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:44.633: INFO: stderr: "" Jun 29 13:09:44.633: INFO: stdout: "true" Jun 29 13:09:44.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:44.740: INFO: stderr: "" Jun 29 13:09:44.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:09:44.740: INFO: validating pod update-demo-nautilus-z4wfn Jun 29 13:09:44.744: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:09:44.744: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:09:44.744: INFO: update-demo-nautilus-z4wfn is verified up and running STEP: scaling up the replication controller Jun 29 13:09:44.747: INFO: scanned /root for discovery docs: Jun 29 13:09:44.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1036' Jun 29 13:09:45.904: INFO: stderr: "" Jun 29 13:09:45.904: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 29 13:09:45.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:45.995: INFO: stderr: "" Jun 29 13:09:45.995: INFO: stdout: "update-demo-nautilus-v9zh2 update-demo-nautilus-z4wfn " Jun 29 13:09:45.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9zh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:46.081: INFO: stderr: "" Jun 29 13:09:46.081: INFO: stdout: "" Jun 29 13:09:46.081: INFO: update-demo-nautilus-v9zh2 is created but not running Jun 29 13:09:51.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1036' Jun 29 13:09:51.183: INFO: stderr: "" Jun 29 13:09:51.183: INFO: stdout: "update-demo-nautilus-v9zh2 update-demo-nautilus-z4wfn " Jun 29 13:09:51.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9zh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:51.270: INFO: stderr: "" Jun 29 13:09:51.270: INFO: stdout: "true" Jun 29 13:09:51.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9zh2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:51.365: INFO: stderr: "" Jun 29 13:09:51.365: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:09:51.365: INFO: validating pod update-demo-nautilus-v9zh2 Jun 29 13:09:51.369: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:09:51.369: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:09:51.369: INFO: update-demo-nautilus-v9zh2 is verified up and running Jun 29 13:09:51.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:51.471: INFO: stderr: "" Jun 29 13:09:51.471: INFO: stdout: "true" Jun 29 13:09:51.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4wfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1036' Jun 29 13:09:51.559: INFO: stderr: "" Jun 29 13:09:51.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:09:51.559: INFO: validating pod update-demo-nautilus-z4wfn Jun 29 13:09:51.561: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:09:51.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:09:51.562: INFO: update-demo-nautilus-z4wfn is verified up and running STEP: using delete to clean up resources Jun 29 13:09:51.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1036' Jun 29 13:09:51.660: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 29 13:09:51.660: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 29 13:09:51.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1036' Jun 29 13:09:51.754: INFO: stderr: "No resources found.\n" Jun 29 13:09:51.755: INFO: stdout: "" Jun 29 13:09:51.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1036 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 29 13:09:51.846: INFO: stderr: "" Jun 29 13:09:51.846: INFO: stdout: "update-demo-nautilus-v9zh2\nupdate-demo-nautilus-z4wfn\n" Jun 29 13:09:52.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1036' Jun 29 13:09:52.467: INFO: stderr: "No resources found.\n" Jun 29 13:09:52.467: INFO: stdout: "" Jun 29 13:09:52.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1036 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 29 13:09:52.638: INFO: stderr: "" Jun 29 13:09:52.638: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:09:52.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1036" for this suite. Jun 29 13:10:14.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:10:14.824: INFO: namespace kubectl-1036 deletion completed in 22.182603178s • [SLOW TEST:47.829 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:10:14.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-aa71c256-bb4e-41cf-9a3f-dac3bc69d02b STEP: Creating a pod to test consume configMaps Jun 29 13:10:14.932: INFO: Waiting up to 5m0s for pod "pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834" in namespace "configmap-2354" to be "success or failure" Jun 29 13:10:14.936: INFO: Pod "pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.4331ms Jun 29 13:10:16.943: INFO: Pod "pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010530946s Jun 29 13:10:18.947: INFO: Pod "pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014368207s STEP: Saw pod success Jun 29 13:10:18.947: INFO: Pod "pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834" satisfied condition "success or failure" Jun 29 13:10:18.950: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834 container configmap-volume-test: STEP: delete the pod Jun 29 13:10:18.968: INFO: Waiting for pod pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834 to disappear Jun 29 13:10:18.993: INFO: Pod pod-configmaps-2011ddbc-fc92-4174-a186-9f0f26855834 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:10:18.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2354" for this suite. Jun 29 13:10:25.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:10:25.108: INFO: namespace configmap-2354 deletion completed in 6.111166531s • [SLOW TEST:10.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:10:25.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 13:10:25.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7942' Jun 29 13:10:25.266: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 29 13:10:25.266: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 29 13:10:25.321: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4bd7v] Jun 29 13:10:25.321: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4bd7v" in namespace "kubectl-7942" to be "running and ready" Jun 29 13:10:25.339: INFO: Pod "e2e-test-nginx-rc-4bd7v": Phase="Pending", Reason="", readiness=false. Elapsed: 17.294493ms Jun 29 13:10:27.343: INFO: Pod "e2e-test-nginx-rc-4bd7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02176571s Jun 29 13:10:29.348: INFO: Pod "e2e-test-nginx-rc-4bd7v": Phase="Running", Reason="", readiness=true. Elapsed: 4.026331205s Jun 29 13:10:29.348: INFO: Pod "e2e-test-nginx-rc-4bd7v" satisfied condition "running and ready" Jun 29 13:10:29.348: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4bd7v] Jun 29 13:10:29.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7942' Jun 29 13:10:29.458: INFO: stderr: "" Jun 29 13:10:29.458: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jun 29 13:10:29.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7942' Jun 29 13:10:29.599: INFO: stderr: "" Jun 29 13:10:29.599: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:10:29.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7942" for this suite. 
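As the deprecation warning above notes, --generator=run/v1 (which makes kubectl run create a ReplicationController) was already on its way out in this v1.15 cluster. The deprecated form used by the test, plus the replacement path its own warning suggests:
$ kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1   # deprecated form, as run by the test
$ kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine           # non-deprecated alternative
$ kubectl logs rc/e2e-test-nginx-rc   # nginx 1.14 logs nothing at startup, hence the empty stdout above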
Jun 29 13:10:51.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:10:51.696: INFO: namespace kubectl-7942 deletion completed in 22.092469985s • [SLOW TEST:26.588 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:10:51.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 29 13:10:51.812: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:11:02.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4555" for this suite. 
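The watch this test sets up can be reproduced from a shell; a sketch, with an illustrative label selector rather than the test's generated one:
$ kubectl get pods -l name=foo --watch   # shows the pod appear on submit, change during graceful shutdown, and finally disappear
$ kubectl delete pod <pod-name>          # default 30s grace period; the kubelet must observe the termination notice, as the STEPs above verify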
Jun 29 13:11:08.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:11:08.263: INFO: namespace pods-4555 deletion completed in 6.089185006s • [SLOW TEST:16.566 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:11:08.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-6441 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6441 to expose endpoints map[] Jun 29 13:11:08.423: INFO: Get endpoints failed (25.589482ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 29 13:11:09.426: INFO: successfully validated that service endpoint-test2 in namespace services-6441 exposes endpoints map[] (1.028781524s elapsed) STEP: Creating pod pod1 in namespace services-6441 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6441 to expose endpoints map[pod1:[80]] Jun 29 13:11:12.472: INFO: successfully validated that service endpoint-test2 in namespace services-6441 exposes endpoints map[pod1:[80]] (3.040683033s elapsed) STEP: Creating pod pod2 in namespace services-6441 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6441 to expose endpoints map[pod1:[80] pod2:[80]] Jun 29 13:11:15.713: INFO: successfully validated that service endpoint-test2 in namespace services-6441 exposes endpoints map[pod1:[80] pod2:[80]] (3.23789882s elapsed) STEP: Deleting pod pod1 in namespace services-6441 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6441 to expose endpoints map[pod2:[80]] Jun 29 13:11:16.760: INFO: successfully validated that service endpoint-test2 in namespace services-6441 exposes endpoints map[pod2:[80]] (1.042602078s elapsed) STEP: Deleting pod pod2 in namespace services-6441 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6441 to expose endpoints map[] Jun 29 13:11:17.775: INFO: successfully validated that service endpoint-test2 in namespace services-6441 exposes endpoints map[] (1.010643048s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:11:17.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6441" for this suite. 
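What the Services test above asserts is plain Endpoints bookkeeping, observable by hand (service name as in the log):
$ kubectl get endpoints endpoint-test2 -o jsonpath='{.subsets[*].addresses[*].ip}'
# empty while no pods match, one IP per ready pod selected by the service, and empty again after the pods are deleted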
Jun 29 13:11:24.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:11:24.125: INFO: namespace services-6441 deletion completed in 6.110167197s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.862 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:11:24.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:11:30.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1709" for this suite. Jun 29 13:11:36.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:11:36.714: INFO: namespace namespaces-1709 deletion completed in 6.099156315s STEP: Destroying namespace "nsdeletetest-9011" for this suite. Jun 29 13:11:36.716: INFO: Namespace nsdeletetest-9011 was already deleted STEP: Destroying namespace "nsdeletetest-5966" for this suite. 
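The namespace-teardown behavior verified above can be sketched with stock commands; names here are illustrative:
$ kubectl create namespace nsdeletetest-demo
$ kubectl create service clusterip test-service --tcp=80:80 -n nsdeletetest-demo
$ kubectl delete namespace nsdeletetest-demo   # deletion blocks until the namespace's contents, including the service, are removed
$ kubectl get services -n nsdeletetest-demo    # NotFound once deletion completes, which is what the recreated namespace is checked for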
Jun 29 13:11:42.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:11:42.836: INFO: namespace nsdeletetest-5966 deletion completed in 6.12013897s • [SLOW TEST:18.710 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:11:42.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 29 13:11:47.464: INFO: Successfully updated pod "annotationupdatef224df72-a2c8-42a7-a6ee-3437ae75f79d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:11:49.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3106" for this suite. 
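A minimal sketch of the projected downwardAPI setup this test exercises. The pod name is illustrative, and busybox stands in for the e2e suite's own test image so the sketch stays self-contained:
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo        # illustrative; the log's pod carries a UUID
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox                   # stand-in for the test's image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotationupdate-demo build=two --overwrite   # the kubelet rewrites the projected file on its next sync, which is the update the test waits for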
Jun 29 13:12:11.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:12:12.158: INFO: namespace projected-3106 deletion completed in 22.656033635s • [SLOW TEST:29.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:12:12.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-84a2fc33-d66d-43c7-8fec-f2ca1b757005 in namespace container-probe-3792 Jun 29 13:12:16.782: INFO: Started pod busybox-84a2fc33-d66d-43c7-8fec-f2ca1b757005 in namespace container-probe-3792 STEP: checking the pod's current state and verifying that restartCount is present Jun 29 13:12:16.785: INFO: Initial restart count of pod busybox-84a2fc33-d66d-43c7-8fec-f2ca1b757005 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:16:17.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3792" for this suite. 
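A minimal sketch of the exec liveness probe this test runs (pod name illustrative; the log's pod carries a UUID suffix). Because /tmp/health is created and never removed, every "cat /tmp/health" probe succeeds, so restartCount stays 0 for the whole observation window, which is why the test takes about four minutes of wall-clock time:
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo        # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
$ kubectl get pod busybox-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # should remain 0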
Jun 29 13:16:23.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:16:23.484: INFO: namespace container-probe-3792 deletion completed in 6.094989204s • [SLOW TEST:251.326 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:16:23.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ce3e0157-4a67-4e09-99b4-2127610b4eb5 STEP: Creating a pod to test consume secrets Jun 29 13:16:23.627: INFO: Waiting up to 5m0s for pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8" in namespace "secrets-822" to be "success or failure" Jun 29 13:16:23.648: INFO: Pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.223985ms Jun 29 13:16:25.651: INFO: Pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023999787s Jun 29 13:16:27.655: INFO: Pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.027582622s Jun 29 13:16:29.659: INFO: Pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031717338s STEP: Saw pod success Jun 29 13:16:29.659: INFO: Pod "pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8" satisfied condition "success or failure" Jun 29 13:16:29.662: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8 container secret-volume-test: STEP: delete the pod Jun 29 13:16:29.679: INFO: Waiting for pod pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8 to disappear Jun 29 13:16:29.684: INFO: Pod pod-secrets-620545be-bae9-4687-81c1-a6d9d8e382b8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:16:29.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-822" for this suite. 
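A self-contained sketch of mounting one secret into two volumes of the same pod, as the test above does; the secret payload and pod name are illustrative:
$ kubectl create secret generic secret-test --from-literal=data-1=value-1   # illustrative payload
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
EOF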
Jun 29 13:16:35.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:16:35.764: INFO: namespace secrets-822 deletion completed in 6.077731114s • [SLOW TEST:12.280 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:16:35.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:16:35.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8965" for this suite. Jun 29 13:16:58.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:16:58.078: INFO: namespace pods-8965 deletion completed in 22.168595628s • [SLOW TEST:22.313 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:16:58.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:16:58.349: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.096691ms)
Jun 29 13:16:58.352: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.637461ms)
Jun 29 13:16:58.354: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.707437ms)
Jun 29 13:16:58.357: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.756856ms)
Jun 29 13:16:58.360: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.333154ms)
Jun 29 13:16:58.362: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.681803ms)
Jun 29 13:16:58.365: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.771785ms)
Jun 29 13:16:58.368: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.488022ms)
Jun 29 13:16:58.371: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.021209ms)
Jun 29 13:16:58.373: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.563096ms)
Jun 29 13:16:58.376: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.272228ms)
Jun 29 13:16:58.378: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.530303ms)
Jun 29 13:16:58.381: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.107352ms)
Jun 29 13:16:58.384: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.595958ms)
Jun 29 13:16:58.387: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.724084ms)
Jun 29 13:16:58.390: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.342798ms)
Jun 29 13:16:58.394: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.134309ms)
Jun 29 13:16:58.397: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.245373ms)
Jun 29 13:16:58.401: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.82899ms)
Jun 29 13:16:58.405: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.169746ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:16:58.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2848" for this suite. Jun 29 13:17:04.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:17:04.570: INFO: namespace proxy-2848 deletion completed in 6.162596562s • [SLOW TEST:6.492 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:17:04.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 29 13:17:08.693: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1b482f1d-d0bb-41d7-b23d-41863c0dcf16,GenerateName:,Namespace:events-5550,SelfLink:/api/v1/namespaces/events-5550/pods/send-events-1b482f1d-d0bb-41d7-b23d-41863c0dcf16,UID:4e5312c9-325b-4bf1-89a5-2763021d1430,ResourceVersion:19106043,Generation:0,CreationTimestamp:2020-06-29 13:17:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 606433679,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9r6g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9r6g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c9r6g true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00293ca20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00293ca40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:17:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:17:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:17:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:17:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.13,StartTime:2020-06-29 13:17:04 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-29 13:17:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f3c19f5a0223d85a2b5b632657f7a3a866aacaf12bb0c3676ef490e92e3cbd98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 29 13:17:10.698: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 29 13:17:12.702: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:17:12.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5550" for this suite. 
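The scheduler and kubelet events checked above can be listed by hand; pod and namespace names as in the log:
$ kubectl get events -n events-5550 --field-selector involvedObject.name=send-events-1b482f1d-d0bb-41d7-b23d-41863c0dcf16
# expect a Scheduled event attributed to the default-scheduler plus Pulled/Created/Started events from the kubelet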
Jun 29 13:17:58.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:17:58.845: INFO: namespace events-5550 deletion completed in 46.129628798s • [SLOW TEST:54.275 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:17:58.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:17:58.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34" in namespace "downward-api-9178" to be "success or failure" Jun 29 13:17:58.978: INFO: Pod "downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34": Phase="Pending", Reason="", readiness=false. Elapsed: 20.449676ms Jun 29 13:18:00.983: INFO: Pod "downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025392871s Jun 29 13:18:02.986: INFO: Pod "downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02862836s STEP: Saw pod success Jun 29 13:18:02.986: INFO: Pod "downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34" satisfied condition "success or failure" Jun 29 13:18:02.988: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34 container client-container: STEP: delete the pod Jun 29 13:18:03.028: INFO: Waiting for pod downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34 to disappear Jun 29 13:18:03.043: INFO: Pod downwardapi-volume-3d552ebb-85f3-46f8-8e8b-1cc916890e34 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:18:03.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9178" for this suite. 
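A minimal sketch of the downward API volume this test projects (pod name illustrative, busybox standing in for the suite's client-container image). With no memory limit declared on the container, limits.memory resolves to the node's allocatable memory, which is exactly the default the test asserts:
$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF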
Jun 29 13:18:09.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:18:09.142: INFO: namespace downward-api-9178 deletion completed in 6.09541502s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:18:09.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 29 13:18:17.271: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 29 13:18:17.279: INFO: Pod pod-with-poststart-http-hook still exists Jun 29 13:18:19.279: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 29 13:18:19.282: INFO: Pod pod-with-poststart-http-hook still exists Jun 29 13:18:21.279: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 29 13:18:21.283: INFO: Pod pod-with-poststart-http-hook still exists Jun 29 13:18:23.279: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 29 13:18:23.284: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:18:23.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-538" for this suite. 
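The lifecycle-hook test above first starts a handler pod, then creates a pod whose postStart hook performs an httpGet against that handler. A minimal Go sketch of the hooked container, not the suite's actual code; the image, host IP, port, and path are illustrative. Note that this API version calls the handler type corev1.Handler; later releases renamed it LifecycleHandler:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "nginx:1.15-alpine", // illustrative image
                Lifecycle: &corev1.Lifecycle{
                    // The kubelet runs this immediately after the container starts;
                    // the pod is not marked Running until the hook succeeds.
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: "10.244.1.10", // hypothetical handler-pod IP
                            Port: intstr.FromInt(8080),
                            Path: "/echo?msg=poststart",
                        },
                    },
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}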
Jun 29 13:18:47.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:18:47.420: INFO: namespace container-lifecycle-hook-538 deletion completed in 24.132319727s • [SLOW TEST:38.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:18:47.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jun 29 13:18:47.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 29 13:18:52.950: INFO: stderr: "" Jun 29 13:18:52.950: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:18:52.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5958" for this suite. 
Jun 29 13:18:58.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:18:59.039: INFO: namespace kubectl-5958 deletion completed in 6.085849689s • [SLOW TEST:11.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:18:59.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jun 29 13:18:59.126: INFO: Waiting up to 5m0s for pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f" in namespace "var-expansion-476" to be "success or failure" Jun 29 13:18:59.134: INFO: Pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.263986ms Jun 29 13:19:01.138: INFO: Pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011808606s Jun 29 13:19:03.142: INFO: Pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f": Phase="Running", Reason="", readiness=true. Elapsed: 4.016020137s Jun 29 13:19:05.147: INFO: Pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020797892s STEP: Saw pod success Jun 29 13:19:05.147: INFO: Pod "var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f" satisfied condition "success or failure" Jun 29 13:19:05.150: INFO: Trying to get logs from node iruya-worker pod var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f container dapi-container: STEP: delete the pod Jun 29 13:19:05.186: INFO: Waiting for pod var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f to disappear Jun 29 13:19:05.190: INFO: Pod var-expansion-7c9584c5-3727-4538-b214-9f3f296b732f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:19:05.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-476" for this suite. 
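The env-composition test above exercises $(VAR) expansion: an env var's value may reference previously defined vars, and the kubelet substitutes them before starting the container. A minimal Go sketch, not the suite's actual code; names and values are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    // $(FOO) and $(BAR) are expanded by the kubelet, so
                    // FOOBAR is seen in-container as "foo-value;;bar-value".
                    {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}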
Jun 29 13:19:11.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:11.284: INFO: namespace var-expansion-476 deletion completed in 6.091524501s • [SLOW TEST:12.245 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:19:11.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 29 13:19:15.438: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:19:15.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7608" for this suite. 
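In the termination-message test above, the container writes "OK" to its termination-message file and exits successfully, so with TerminationMessagePolicy FallbackToLogsOnError the kubelet takes the message from the file and never consults the logs (the fallback only triggers when the file is empty and the container failed). A minimal Go sketch, not the suite's actual code; the image and file path are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "termination-message-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo -n OK > /dev/termination-custom-log"},
                // The kubelet reads this file when the container exits.
                TerminationMessagePath:   "/dev/termination-custom-log",
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}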
Jun 29 13:19:21.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:21.691: INFO: namespace container-runtime-7608 deletion completed in 6.114212525s • [SLOW TEST:10.406 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:19:21.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 29 13:19:21.809: INFO: Waiting up to 5m0s for pod "pod-c1baed47-6d76-43a9-9cf2-d24073c789ad" in namespace "emptydir-8940" to be "success or failure" Jun 29 13:19:21.817: INFO: Pod "pod-c1baed47-6d76-43a9-9cf2-d24073c789ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.742596ms Jun 29 13:19:23.837: INFO: Pod "pod-c1baed47-6d76-43a9-9cf2-d24073c789ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028610903s Jun 29 13:19:25.842: INFO: Pod "pod-c1baed47-6d76-43a9-9cf2-d24073c789ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033353613s STEP: Saw pod success Jun 29 13:19:25.842: INFO: Pod "pod-c1baed47-6d76-43a9-9cf2-d24073c789ad" satisfied condition "success or failure" Jun 29 13:19:25.845: INFO: Trying to get logs from node iruya-worker2 pod pod-c1baed47-6d76-43a9-9cf2-d24073c789ad container test-container: STEP: delete the pod Jun 29 13:19:25.881: INFO: Waiting for pod pod-c1baed47-6d76-43a9-9cf2-d24073c789ad to disappear Jun 29 13:19:25.895: INFO: Pod pod-c1baed47-6d76-43a9-9cf2-d24073c789ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:19:25.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8940" for this suite. 
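The emptyDir variant above, "(non-root,0777,default)", runs as a non-root UID and manipulates a 0777 file on a default-medium (node disk) emptyDir. A minimal Go sketch, not the suite's actual code; the UID, image, and shell command are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1001) // hypothetical non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // "default" medium = node disk; corev1.StorageMediumMemory would be tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}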
Jun 29 13:19:31.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:32.118: INFO: namespace emptydir-8940 deletion completed in 6.220214413s • [SLOW TEST:10.426 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:19:32.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a01c558a-a59b-41e8-bfb0-f7095e17d497 STEP: Creating a pod to test consume secrets Jun 29 13:19:32.281: INFO: Waiting up to 5m0s for pod "pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4" in namespace "secrets-2016" to be "success or failure" Jun 29 13:19:32.284: INFO: Pod "pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.646225ms Jun 29 13:19:34.369: INFO: Pod "pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088895809s Jun 29 13:19:36.374: INFO: Pod "pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093369358s STEP: Saw pod success Jun 29 13:19:36.374: INFO: Pod "pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4" satisfied condition "success or failure" Jun 29 13:19:36.377: INFO: Trying to get logs from node iruya-worker pod pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4 container secret-volume-test: STEP: delete the pod Jun 29 13:19:36.394: INFO: Waiting for pod pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4 to disappear Jun 29 13:19:36.400: INFO: Pod pod-secrets-17b8292e-7cff-4c3a-87d4-ab604d85d8e4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:19:36.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2016" for this suite. Jun 29 13:19:42.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:42.518: INFO: namespace secrets-2016 deletion completed in 6.115754722s STEP: Destroying namespace "secret-namespace-4577" for this suite. 
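The secrets test above creates same-named secrets in two namespaces and mounts one of them; the isolation it verifies is structural, because a secret volume can only reference a secret in the pod's own namespace. A minimal Go sketch of the mount, not the suite's actual code; the image, command, and secret name are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-2016"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    // Resolved in the pod's namespace; a secret of the same
                    // name in another namespace is never visible here.
                    Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}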
Jun 29 13:19:48.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:48.609: INFO: namespace secret-namespace-4577 deletion completed in 6.090875685s • [SLOW TEST:16.490 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:19:48.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-5051a481-0106-489e-8d64-50b670c87841 STEP: Creating a pod to test consume configMaps Jun 29 13:19:48.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15" in namespace "projected-1298" to be "success or failure" Jun 29 13:19:48.717: INFO: Pod "pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15": Phase="Pending", Reason="", readiness=false. Elapsed: 40.041742ms Jun 29 13:19:50.721: INFO: Pod "pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044063789s Jun 29 13:19:52.726: INFO: Pod "pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048665814s STEP: Saw pod success Jun 29 13:19:52.726: INFO: Pod "pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15" satisfied condition "success or failure" Jun 29 13:19:52.730: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15 container projected-configmap-volume-test: STEP: delete the pod Jun 29 13:19:52.797: INFO: Waiting for pod pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15 to disappear Jun 29 13:19:52.801: INFO: Pod pod-projected-configmaps-8e1fdd40-3112-462f-bd69-c22d846dba15 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:19:52.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1298" for this suite. 
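In the projected-configMap test above, "with mappings" means individual keys are remapped to custom file paths via items, and "as non-root" adds a RunAsUser security context. A minimal Go sketch, not the suite's actual code; the UID, key, and paths are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // hypothetical non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                // The mapping: key "data-2" surfaces at path/to/data-2.
                                Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}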
Jun 29 13:19:58.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:19:58.898: INFO: namespace projected-1298 deletion completed in 6.093497586s • [SLOW TEST:10.289 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:19:58.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:20:04.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4642" for this suite. 
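Adoption in the ReplicationController test above works because the bare pod has a matching label and no controller ownerReference, so the RC manager claims it instead of creating a new replica. A minimal Go sketch of the matching pair, not the suite's actual code; names and image are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"name": "pod-adoption"}

    // A bare pod with no ownerReferences...
    orphan := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx:1.14-alpine"}},
        },
    }

    // ...and a replication controller whose selector matches its labels.
    // The RC manager adopts the existing pod rather than creating a replica.
    one := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &one,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec:       orphan.Spec,
            },
        },
    }
    fmt.Println("would create:", orphan.Name, "then", rc.Name)
}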
Jun 29 13:20:26.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:20:26.112: INFO: namespace replication-controller-4642 deletion completed in 22.092974881s • [SLOW TEST:27.213 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:20:26.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:20:30.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6043" for this suite. Jun 29 13:21:16.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:21:16.290: INFO: namespace kubelet-test-6043 deletion completed in 46.094527615s • [SLOW TEST:50.177 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:21:16.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5f7f60e4-b898-4622-8516-870546968cde STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5f7f60e4-b898-4622-8516-870546968cde STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:22:26.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5253" for this suite. Jun 29 13:22:48.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:22:48.843: INFO: namespace projected-5253 deletion completed in 22.09745785s • [SLOW TEST:92.552 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:22:48.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 29 13:22:48.944: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 29 13:22:53.948: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:22:54.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9709" for this suite. 
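The release test above is the inverse of adoption: once the pod's label no longer matches the RC's selector, the controller removes its ownerReference and creates a replacement to restore the replica count. A minimal Go sketch of flipping the label, not the suite's actual code; the pod name is a hypothetical placeholder, and the ctx-less Get/Update signatures match client-go of the v1.15 era:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    ns := "replication-controller-9709" // illustrative
    pod, err := cs.CoreV1().Pods(ns).Get("pod-release-xxxxx", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Change the label the RC selects on; the controller manager then
    // releases the pod and spins up a new one to restore the replica count.
    pod.Labels["name"] = "not-pod-release"
    if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
        panic(err)
    }
    fmt.Println("pod released from its controller")
}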
Jun 29 13:23:01.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:23:01.189: INFO: namespace replication-controller-9709 deletion completed in 6.221468728s • [SLOW TEST:12.346 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:23:01.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-06448078-7bf0-430d-8a65-d7c1ba70d488 STEP: Creating secret with name s-test-opt-upd-3b88a074-d890-4c06-bebd-acc921135a5a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-06448078-7bf0-430d-8a65-d7c1ba70d488 STEP: Updating secret s-test-opt-upd-3b88a074-d890-4c06-bebd-acc921135a5a STEP: Creating secret with name s-test-opt-create-9c5a9caa-b737-47e0-aa54-0ba9133a8f97 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:24:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8382" for this suite. 
Jun 29 13:24:51.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:24:52.009: INFO: namespace secrets-8382 deletion completed in 22.141029046s • [SLOW TEST:110.819 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:24:52.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-f89a980e-27de-4032-9c6e-4482933c9217 STEP: Creating configMap with name cm-test-opt-upd-d2998006-e005-4962-9ab7-072ed941ddae STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f89a980e-27de-4032-9c6e-4482933c9217 STEP: Updating configmap cm-test-opt-upd-d2998006-e005-4962-9ab7-072ed941ddae STEP: Creating configMap with name cm-test-opt-create-558ad7e9-fbfa-4ba8-af5e-fec27386013e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:25:00.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8933" for this suite. 
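Both "optional updates" tests above (secrets, then configMaps) lean on the Optional flag: the pod mounts volumes for objects that are later deleted, updated, and created after the fact, and the kubelet keeps the projected files in sync on its periodic resync. A minimal Go sketch of one such optional source, not the suite's actual code; names, image, and command are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "createcm-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/cm-volumes/create/data-1 2>/dev/null; sleep 2; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "createcm-volume", MountPath: "/etc/cm-volumes/create"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "createcm-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                        // Optional lets the pod start before the configMap
                        // exists; once created, its keys appear in the volume.
                        Optional: &optional,
                    },
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}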
Jun 29 13:25:22.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:25:22.361: INFO: namespace configmap-8933 deletion completed in 22.08073385s • [SLOW TEST:30.352 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:25:22.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:25:22.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2" in namespace "projected-7387" to be "success or failure" Jun 29 13:25:22.484: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.334798ms Jun 29 13:25:24.728: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302623314s Jun 29 13:25:26.733: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306955602s Jun 29 13:25:28.737: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311198319s Jun 29 13:25:30.740: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Running", Reason="", readiness=true. Elapsed: 8.314558736s Jun 29 13:25:32.745: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.3186671s STEP: Saw pod success Jun 29 13:25:32.745: INFO: Pod "downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2" satisfied condition "success or failure" Jun 29 13:25:32.748: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2 container client-container: STEP: delete the pod Jun 29 13:25:32.767: INFO: Waiting for pod downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2 to disappear Jun 29 13:25:32.771: INFO: Pod downwardapi-volume-1ce3a2f5-6b30-4bc1-91d3-27eee09f7dc2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:25:32.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7387" for this suite. 
Jun 29 13:25:38.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:25:38.862: INFO: namespace projected-7387 deletion completed in 6.087979704s • [SLOW TEST:16.501 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:25:38.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 29 13:25:43.487: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ccb48d72-ed66-4c48-bfab-e222b159b9f9" Jun 29 13:25:43.487: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ccb48d72-ed66-4c48-bfab-e222b159b9f9" in namespace "pods-142" to be "terminated due to deadline exceeded" Jun 29 13:25:43.509: INFO: Pod "pod-update-activedeadlineseconds-ccb48d72-ed66-4c48-bfab-e222b159b9f9": Phase="Running", Reason="", readiness=true. Elapsed: 21.588851ms Jun 29 13:25:45.513: INFO: Pod "pod-update-activedeadlineseconds-ccb48d72-ed66-4c48-bfab-e222b159b9f9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.025037954s Jun 29 13:25:45.513: INFO: Pod "pod-update-activedeadlineseconds-ccb48d72-ed66-4c48-bfab-e222b159b9f9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:25:45.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-142" for this suite. 
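The pods test above exploits the fact that activeDeadlineSeconds is one of the few pod-spec fields that is mutable on a live pod: shrinking it makes the kubelet fail the pod with Reason=DeadlineExceeded, which is the condition the test waits for. A minimal Go sketch, not the suite's actual code; the pod name and deadline are illustrative, and the ctx-less Get/Update signatures match client-go of the v1.15 era:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    ns := "pods-142" // illustrative
    pod, err := cs.CoreV1().Pods(ns).Get("pod-update-activedeadlineseconds", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Tighten the deadline on the running pod; once it elapses, the pod
    // transitions to Phase=Failed with Reason=DeadlineExceeded.
    deadline := int64(5)
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
        panic(err)
    }
    fmt.Println("updated activeDeadlineSeconds to", deadline)
}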
Jun 29 13:25:51.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:25:51.624: INFO: namespace pods-142 deletion completed in 6.107392279s • [SLOW TEST:12.761 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:25:51.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 29 13:25:51.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-resource-version,UID:425b85a6-69f0-4058-96c2-43c30eaa3eb6,ResourceVersion:19107556,Generation:0,CreationTimestamp:2020-06-29 13:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 29 13:25:51.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-resource-version,UID:425b85a6-69f0-4058-96c2-43c30eaa3eb6,ResourceVersion:19107557,Generation:0,CreationTimestamp:2020-06-29 13:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:25:51.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5702" for this suite. 
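Starting the watch at the resourceVersion returned by the first update, as the test above does, replays every change after that point, so only the second MODIFIED and the DELETED events arrive. A minimal Go sketch, not the suite's actual code; the resourceVersion value is illustrative, and the ctx-less Watch signature matches client-go of the v1.15 era:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    // Resume the event stream from a version observed earlier.
    w, err := cs.CoreV1().ConfigMaps("watch-5702").Watch(metav1.ListOptions{
        FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
        ResourceVersion: "19107555", // illustrative: the first update's version
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // Expect MODIFIED (the second update) then DELETED, as in the log above.
    for ev := range w.ResultChan() {
        fmt.Println("got event:", ev.Type)
        if ev.Type == "DELETED" {
            return
        }
    }
}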
Jun 29 13:25:57.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:25:57.880: INFO: namespace watch-5702 deletion completed in 6.102096388s • [SLOW TEST:6.257 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:25:57.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 29 13:25:58.030: INFO: Waiting up to 5m0s for pod "var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79" in namespace "var-expansion-9719" to be "success or failure" Jun 29 13:25:58.035: INFO: Pod "var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266098ms Jun 29 13:26:00.039: INFO: Pod "var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008441095s Jun 29 13:26:02.043: INFO: Pod "var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012853673s STEP: Saw pod success Jun 29 13:26:02.043: INFO: Pod "var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79" satisfied condition "success or failure" Jun 29 13:26:02.046: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79 container dapi-container: STEP: delete the pod Jun 29 13:26:02.168: INFO: Waiting for pod var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79 to disappear Jun 29 13:26:02.334: INFO: Pod var-expansion-db88d689-897e-4a4f-bbba-08ec1d931b79 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:26:02.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9719" for this suite. 
Jun 29 13:26:08.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:26:08.535: INFO: namespace var-expansion-9719 deletion completed in 6.196767768s • [SLOW TEST:10.655 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:26:08.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-66e6fa99-4587-48cb-8217-55e2b9eb627b STEP: Creating secret with name secret-projected-all-test-volume-f3d26c6f-9b35-4404-95db-b279da550c7b STEP: Creating a pod to test Check all projections for projected volume plugin Jun 29 13:26:08.647: INFO: Waiting up to 5m0s for pod "projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78" in namespace "projected-489" to be "success or failure" Jun 29 13:26:08.652: INFO: Pod "projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.973168ms Jun 29 13:26:10.693: INFO: Pod "projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045653503s Jun 29 13:26:12.698: INFO: Pod "projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05070243s STEP: Saw pod success Jun 29 13:26:12.698: INFO: Pod "projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78" satisfied condition "success or failure" Jun 29 13:26:12.702: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78 container projected-all-volume-test: STEP: delete the pod Jun 29 13:26:12.736: INFO: Waiting for pod projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78 to disappear Jun 29 13:26:12.743: INFO: Pod projected-volume-7e178fa9-2567-418c-89f7-fa9c5755eb78 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:26:12.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-489" for this suite. 
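The "projected combined" test above checks that one projected volume can merge secret, configMap, and downward API sources under a single mount point. A minimal Go sketch of such a volume, not the suite's actual code; keys, paths, and names are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "projected-all-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /all/podname /all/cm/data /all/secret/data"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/all"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        // All three source kinds share one mount point.
                        Sources: []corev1.VolumeProjection{
                            {DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                                }},
                            }},
                            {ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                                Items:                []corev1.KeyToPath{{Key: "configmap-data", Path: "cm/data"}},
                            }},
                            {Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                                Items:                []corev1.KeyToPath{{Key: "secret-data", Path: "secret/data"}},
                            }},
                        },
                    },
                },
            }},
        },
    }
    fmt.Println("would create pod:", pod.Name)
}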
Jun 29 13:26:18.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:26:18.841: INFO: namespace projected-489 deletion completed in 6.094109849s • [SLOW TEST:10.306 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:26:18.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1859, will wait for the garbage collector to delete the pods Jun 29 13:26:23.136: INFO: Deleting Job.batch foo took: 6.99882ms Jun 29 13:26:23.436: INFO: Terminating Job.batch foo pods took: 300.281706ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:26:58.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1859" for this suite. 
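The phrase "will wait for the garbage collector to delete the pods" in the Job test above corresponds to deleting the Job with a non-orphaning propagation policy. A minimal Go sketch, not the suite's actual code; the Delete signature taking *metav1.DeleteOptions matches client-go of the v1.15 era (newer client-go takes a context and an options value):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    // Background propagation: the Job object disappears immediately and the
    // garbage collector then reaps its pods, matching the log above.
    policy := metav1.DeletePropagationBackground
    err = cs.BatchV1().Jobs("job-1859").Delete("foo", &metav1.DeleteOptions{
        PropagationPolicy: &policy,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("deleted Job foo; its pods will be garbage-collected")
}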
Jun 29 13:27:04.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:27:04.236: INFO: namespace job-1859 deletion completed in 6.09328231s • [SLOW TEST:45.394 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:27:04.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:27:04.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac" in namespace "downward-api-9187" to be "success or failure" Jun 29 13:27:04.366: INFO: Pod "downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac": Phase="Pending", Reason="", readiness=false. Elapsed: 20.876759ms Jun 29 13:27:06.371: INFO: Pod "downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025218519s Jun 29 13:27:08.376: INFO: Pod "downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03011523s STEP: Saw pod success Jun 29 13:27:08.376: INFO: Pod "downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac" satisfied condition "success or failure" Jun 29 13:27:08.379: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac container client-container: STEP: delete the pod Jun 29 13:27:08.399: INFO: Waiting for pod downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac to disappear Jun 29 13:27:08.402: INFO: Pod downwardapi-volume-18b861c8-d9a6-4796-bae1-33010b24f9ac no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:27:08.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9187" for this suite. 
Jun 29 13:27:14.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:27:14.511: INFO: namespace downward-api-9187 deletion completed in 6.105436728s • [SLOW TEST:10.275 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:27:14.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jun 29 13:27:14.578: INFO: Waiting up to 5m0s for pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6" in namespace "containers-7191" to be "success or failure" Jun 29 13:27:14.599: INFO: Pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.411512ms Jun 29 13:27:16.604: INFO: Pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025625086s Jun 29 13:27:18.608: INFO: Pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.029799412s Jun 29 13:27:20.611: INFO: Pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03349458s STEP: Saw pod success Jun 29 13:27:20.612: INFO: Pod "client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6" satisfied condition "success or failure" Jun 29 13:27:20.614: INFO: Trying to get logs from node iruya-worker pod client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6 container test-container: STEP: delete the pod Jun 29 13:27:20.660: INFO: Waiting for pod client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6 to disappear Jun 29 13:27:20.688: INFO: Pod client-containers-8c16104d-3a20-4359-a0b1-d5939592acd6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:27:20.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7191" for this suite. 
Jun 29 13:27:26.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:27:26.853: INFO: namespace containers-7191 deletion completed in 6.161752225s • [SLOW TEST:12.342 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:27:26.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-0f0c8840-2b51-40fe-a721-81a6cff63ad2 STEP: Creating a pod to test consume configMaps Jun 29 13:27:26.943: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8" in namespace "projected-1283" to be "success or failure" Jun 29 13:27:26.947: INFO: Pod "pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.73219ms Jun 29 13:27:28.951: INFO: Pod "pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988254s Jun 29 13:27:31.102: INFO: Pod "pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158892279s STEP: Saw pod success Jun 29 13:27:31.102: INFO: Pod "pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8" satisfied condition "success or failure" Jun 29 13:27:31.105: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8 container projected-configmap-volume-test: STEP: delete the pod Jun 29 13:27:31.140: INFO: Waiting for pod pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8 to disappear Jun 29 13:27:31.176: INFO: Pod pod-projected-configmaps-0d587522-ae61-494d-bd9f-d31d766fcab8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:27:31.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1283" for this suite. 
Jun 29 13:27:37.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:27:37.261: INFO: namespace projected-1283 deletion completed in 6.081217333s • [SLOW TEST:10.407 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:27:37.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:27:37.356: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 29 13:27:37.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:37.367: INFO: Number of nodes with available pods: 0 Jun 29 13:27:37.367: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:27:38.373: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:38.377: INFO: Number of nodes with available pods: 0 Jun 29 13:27:38.377: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:27:39.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:39.429: INFO: Number of nodes with available pods: 0 Jun 29 13:27:39.429: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:27:40.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:40.376: INFO: Number of nodes with available pods: 0 Jun 29 13:27:40.376: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:27:41.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:41.375: INFO: Number of nodes with available pods: 0 Jun 29 13:27:41.375: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:27:42.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:42.376: INFO: Number of nodes with available pods: 2 Jun 29 13:27:42.376: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 29 13:27:42.410: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:42.410: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:42.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:43.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:43.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:43.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:44.435: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:44.435: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:44.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:45.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:45.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:45.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:46.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:46.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:46.436: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:46.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:47.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:47.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 29 13:27:47.436: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:47.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:48.435: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:48.435: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:48.435: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:48.438: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:49.435: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:49.435: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:49.435: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:49.437: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:50.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:50.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:50.436: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:50.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:51.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:51.436: INFO: Wrong image for pod: daemon-set-z22vf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:51.436: INFO: Pod daemon-set-z22vf is not available Jun 29 13:27:51.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:52.439: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:52.439: INFO: Pod daemon-set-lj7mm is not available Jun 29 13:27:52.443: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:53.504: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:53.504: INFO: Pod daemon-set-lj7mm is not available Jun 29 13:27:53.508: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:54.436: INFO: Wrong image for pod: daemon-set-h8skm. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:54.436: INFO: Pod daemon-set-lj7mm is not available Jun 29 13:27:54.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:55.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:55.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:56.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:56.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:57.435: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:57.435: INFO: Pod daemon-set-h8skm is not available Jun 29 13:27:57.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:58.439: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:58.439: INFO: Pod daemon-set-h8skm is not available Jun 29 13:27:58.444: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:27:59.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:27:59.436: INFO: Pod daemon-set-h8skm is not available Jun 29 13:27:59.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:00.436: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:28:00.436: INFO: Pod daemon-set-h8skm is not available Jun 29 13:28:00.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:01.435: INFO: Wrong image for pod: daemon-set-h8skm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 29 13:28:01.435: INFO: Pod daemon-set-h8skm is not available Jun 29 13:28:01.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:02.451: INFO: Pod daemon-set-qzxpd is not available Jun 29 13:28:02.454: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 29 13:28:02.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:02.460: INFO: Number of nodes with available pods: 1 Jun 29 13:28:02.460: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:28:03.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:03.468: INFO: Number of nodes with available pods: 1 Jun 29 13:28:03.468: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:28:04.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:04.469: INFO: Number of nodes with available pods: 1 Jun 29 13:28:04.469: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:28:05.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:05.467: INFO: Number of nodes with available pods: 1 Jun 29 13:28:05.467: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:28:06.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:28:06.469: INFO: Number of nodes with available pods: 2 Jun 29 13:28:06.469: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9600, will wait for the garbage collector to delete the pods Jun 29 13:28:06.544: INFO: Deleting DaemonSet.extensions daemon-set took: 6.691312ms Jun 29 13:28:06.845: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.485768ms Jun 29 13:28:12.264: INFO: Number of nodes with available pods: 0 Jun 29 13:28:12.264: INFO: Number of running nodes: 0, number of available pods: 0 Jun 29 13:28:12.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9600/daemonsets","resourceVersion":"19108091"},"items":null} Jun 29 13:28:12.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9600/pods","resourceVersion":"19108091"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:28:12.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9600" for this suite. 
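The long poll above is the RollingUpdate strategy at work: once the template image changes from nginx:1.14-alpine to redis:1.0, the controller deletes old daemon pods and replaces them node by node, so each node briefly reports a pod that is "not available" before the new image settles. Control-plane nodes are skipped throughout because the DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint. A sketch of a DaemonSet with this update strategy; labels are illustrative.

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingUpdateDaemonSet replaces pods automatically whenever the
// template changes, which is what the image-update check relies on.
func rollingUpdateDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // the test later swaps this to redis:1.0
					}},
				},
			},
		},
	}
}
```

With Type set to OnDelete instead, old pods would only be replaced after being deleted manually, and the "Wrong image for pod" polling above would never converge on its own.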
Jun 29 13:28:18.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:28:18.430: INFO: namespace daemonsets-9600 deletion completed in 6.149205284s • [SLOW TEST:41.169 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:28:18.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:28:18.469: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 29 13:28:18.534: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 29 13:28:23.546: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 29 13:28:23.546: INFO: Creating deployment "test-rolling-update-deployment" Jun 29 13:28:23.551: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 29 13:28:23.576: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 29 13:28:25.585: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 29 13:28:25.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729034103, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729034103, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729034103, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729034103, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 13:28:27.592: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 29 13:28:27.600: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7876,SelfLink:/apis/apps/v1/namespaces/deployment-7876/deployments/test-rolling-update-deployment,UID:656f207a-935c-475a-bced-5750f5527196,ResourceVersion:19108195,Generation:1,CreationTimestamp:2020-06-29 13:28:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-29 13:28:23 +0000 UTC 2020-06-29 13:28:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-29 13:28:27 +0000 UTC 2020-06-29 13:28:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 29 13:28:27.603: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7876,SelfLink:/apis/apps/v1/namespaces/deployment-7876/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:06717135-8a3d-4687-9b80-2d6f964b6652,ResourceVersion:19108185,Generation:1,CreationTimestamp:2020-06-29 13:28:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 656f207a-935c-475a-bced-5750f5527196 0xc002d9df07 0xc002d9df08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 29 13:28:27.603: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 29 13:28:27.603: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7876,SelfLink:/apis/apps/v1/namespaces/deployment-7876/replicasets/test-rolling-update-controller,UID:a68e6744-ecae-456a-a440-182fb35d9a9c,ResourceVersion:19108194,Generation:2,CreationTimestamp:2020-06-29 13:28:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 656f207a-935c-475a-bced-5750f5527196 0xc002d9de27 0xc002d9de28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 29 13:28:27.606: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-fx8j9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-fx8j9,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7876,SelfLink:/api/v1/namespaces/deployment-7876/pods/test-rolling-update-deployment-79f6b9d75c-fx8j9,UID:2a48a301-7dad-4382-92cd-43d394db4ff0,ResourceVersion:19108184,Generation:0,CreationTimestamp:2020-06-29 13:28:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 06717135-8a3d-4687-9b80-2d6f964b6652 0xc000b08107 0xc000b08108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b8gfh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b8gfh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-b8gfh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b085b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b085d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:28:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:28:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:28:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:28:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.216,StartTime:2020-06-29 13:28:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-29 13:28:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6261a67df90b5e45149d352ceebde87a579c4e1675a3a8f786615e275c21f733}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:28:27.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7876" for this suite. 
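What the dumps above record: the Deployment's label selector matches the pods of the pre-existing ReplicaSet test-rolling-update-controller, so the Deployment adopts it as an old ReplicaSet (note its Replicas:*0 and its ownerReference pointing at the Deployment), then rolls pods over to the new template at the next revision within the default 25% MaxUnavailable / 25% MaxSurge budget. A sketch of such a Deployment; the values mirror the dump, though the exact e2e helper differs.

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateDeployment adopts any existing ReplicaSet whose pods
// match the selector, then rolls them to the new template.
func rollingUpdateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	// Same labels as the pre-existing ReplicaSet's pods, so it is adopted.
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}
```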
Jun 29 13:28:33.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:28:33.728: INFO: namespace deployment-7876 deletion completed in 6.118865785s • [SLOW TEST:15.298 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:28:33.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 29 13:28:33.902: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:28:41.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4289" for this suite. 
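The init-container spec above relies on ordering guarantees: initContainers run sequentially to completion before any app container starts, and on a RestartPolicy: Never pod the whole pod reaches Succeeded once every init container and the main container exit 0. A minimal sketch, with image and commands as assumptions.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs init1, then init2, then run1; with RestartPolicy
// Never, three clean exits leave the pod in phase Succeeded.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/true"},
			}},
		},
	}
}
```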
Jun 29 13:28:47.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:28:47.431: INFO: namespace init-container-4289 deletion completed in 6.113244229s • [SLOW TEST:13.703 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:28:47.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jun 29 13:28:47.516: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4010" to be "success or failure" Jun 29 13:28:47.525: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.023995ms Jun 29 13:28:49.529: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012828038s Jun 29 13:28:51.533: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01683853s Jun 29 13:28:53.537: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020750339s STEP: Saw pod success Jun 29 13:28:53.537: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 29 13:28:53.539: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 29 13:28:53.571: INFO: Waiting for pod pod-host-path-test to disappear Jun 29 13:28:53.585: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:28:53.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4010" for this suite. 
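The hostPath spec mounts a directory from the node's own filesystem into the pod and asserts the mount point carries the expected mode. A rough pod sketch; the path, image, and mounttest-style args are assumptions, not the framework's actual values.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathPod mounts a node directory and lets a helper container
// report the mode of the mounted path.
func hostPathPod() *corev1.Pod {
	// Assumption: the real test may leave Type unset, which skips type checks.
	hostPathType := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-test", Type: &hostPathType},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				// Assumed args: print the filesystem type and mode of the mount point.
				Args:         []string{"--fs_type=/test-volume", "--file_mode=/test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
```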
Jun 29 13:28:59.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:28:59.677: INFO: namespace hostpath-4010 deletion completed in 6.089908972s • [SLOW TEST:12.246 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:28:59.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-15228354-5366-413a-8d83-cc2a3a840882 STEP: Creating a pod to test consume configMaps Jun 29 13:28:59.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1" in namespace "configmap-2812" to be "success or failure" Jun 29 13:28:59.836: INFO: Pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043578ms Jun 29 13:29:01.984: INFO: Pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158007588s Jun 29 13:29:03.988: INFO: Pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.16292298s Jun 29 13:29:05.993: INFO: Pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167586624s STEP: Saw pod success Jun 29 13:29:05.993: INFO: Pod "pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1" satisfied condition "success or failure" Jun 29 13:29:05.997: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1 container configmap-volume-test: STEP: delete the pod Jun 29 13:29:06.024: INFO: Waiting for pod pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1 to disappear Jun 29 13:29:06.027: INFO: Pod pod-configmaps-1d91e1ae-1283-47a8-9f34-f2b0c64860a1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:29:06.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2812" for this suite. 
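"With mappings" in the spec above means the ConfigMap volume lists explicit items: only the named keys are projected, and each lands at a path of your choosing rather than as one file per key at the volume root. A sketch of such a volume, with the key and path illustrative.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// configMapVolumeWithMappings projects a single key of the named
// ConfigMap to a chosen relative path under the mount point.
func configMapVolumeWithMappings(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // key in the ConfigMap (illustrative)
					Path: "path/to/data-2", // file created under the mount point
				}},
			},
		},
	}
}
```

A container mounting this volume at /etc/configmap-volume would read the value at /etc/configmap-volume/path/to/data-2.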
Jun 29 13:29:12.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:29:12.121: INFO: namespace configmap-2812 deletion completed in 6.091389034s • [SLOW TEST:12.443 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:29:12.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:29:12.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e" in namespace "downward-api-1921" to be "success or failure" Jun 29 13:29:12.222: INFO: Pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.842915ms Jun 29 13:29:14.226: INFO: Pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018958157s Jun 29 13:29:16.231: INFO: Pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.023857366s Jun 29 13:29:18.235: INFO: Pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028694591s STEP: Saw pod success Jun 29 13:29:18.235: INFO: Pod "downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e" satisfied condition "success or failure" Jun 29 13:29:18.238: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e container client-container: STEP: delete the pod Jun 29 13:29:18.279: INFO: Waiting for pod downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e to disappear Jun 29 13:29:18.307: INFO: Pod downwardapi-volume-87968845-2614-4622-a1b0-5807e1611d0e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:29:18.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1921" for this suite. 
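This spec sets an explicit per-item mode on a downward API file and then reads the mode back from inside the container. A sketch of the volume shape; the volume name and path are assumptions.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// downwardAPIVolumeWithMode exposes the pod name as a file whose mode
// is pinned to 0400, which the test verifies from inside the container.
func downwardAPIVolumeWithMode() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
					Mode: &mode, // per-item mode overrides the volume's DefaultMode
				}},
			},
		},
	}
}
```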
Jun 29 13:29:24.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:29:24.405: INFO: namespace downward-api-1921 deletion completed in 6.094274118s • [SLOW TEST:12.283 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:29:24.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-445a0ba6-b0a9-46e3-81cb-e229872de4e1 STEP: Creating a pod to test consume configMaps Jun 29 13:29:24.499: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7" in namespace "projected-8087" to be "success or failure" Jun 29 13:29:24.508: INFO: Pod "pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96693ms Jun 29 13:29:26.513: INFO: Pod "pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01326016s Jun 29 13:29:28.517: INFO: Pod "pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017640273s STEP: Saw pod success Jun 29 13:29:28.517: INFO: Pod "pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7" satisfied condition "success or failure" Jun 29 13:29:28.520: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7 container projected-configmap-volume-test: STEP: delete the pod Jun 29 13:29:28.533: INFO: Waiting for pod pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7 to disappear Jun 29 13:29:28.550: INFO: Pod pod-projected-configmaps-d288ca86-14bd-412e-8b36-18295904d0c7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:29:28.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8087" for this suite. 
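Here the same ConfigMap is projected into two separate volumes mounted at different paths in one pod, verifying both mounts see identical content. A sketch under assumed names and image:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// twoVolumeProjectedPod mounts the same projected ConfigMap twice,
// at two distinct paths, in a single pod.
func twoVolumeProjectedPod(cmName string) *corev1.Pod {
	projected := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "projected-configmap-volume-1", VolumeSource: projected()},
				{Name: "projected-configmap-volume-2", VolumeSource: projected()},
			},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/projected-configmap-volume-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2"},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
```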
Jun 29 13:29:34.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:29:34.668: INFO: namespace projected-8087 deletion completed in 6.114686783s • [SLOW TEST:10.262 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:29:34.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 29 13:29:34.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:34.872: INFO: Number of nodes with available pods: 0 Jun 29 13:29:34.872: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:29:35.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:35.880: INFO: Number of nodes with available pods: 0 Jun 29 13:29:35.880: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:29:36.993: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:36.996: INFO: Number of nodes with available pods: 0 Jun 29 13:29:36.996: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:29:37.908: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:37.910: INFO: Number of nodes with available pods: 0 Jun 29 13:29:37.910: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:29:38.878: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:38.882: INFO: Number of nodes with available pods: 0 Jun 29 13:29:38.882: INFO: Node iruya-worker is running more than one daemon pod Jun 29 13:29:39.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:39.881: INFO: Number of nodes with available pods: 2 
Jun 29 13:29:39.881: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 29 13:29:39.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 13:29:39.926: INFO: Number of nodes with available pods: 2 Jun 29 13:29:39.926: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8041, will wait for the garbage collector to delete the pods Jun 29 13:29:41.015: INFO: Deleting DaemonSet.extensions daemon-set took: 6.713061ms Jun 29 13:29:41.315: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266122ms Jun 29 13:29:44.518: INFO: Number of nodes with available pods: 0 Jun 29 13:29:44.518: INFO: Number of running nodes: 0, number of available pods: 0 Jun 29 13:29:44.521: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8041/daemonsets","resourceVersion":"19108569"},"items":null} Jun 29 13:29:44.523: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8041/pods","resourceVersion":"19108569"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:29:44.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8041" for this suite. Jun 29 13:29:50.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:29:50.642: INFO: namespace daemonsets-8041 deletion completed in 6.107422797s • [SLOW TEST:15.974 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:29:50.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0629 13:30:30.932791 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 29 13:30:30.932: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:30:30.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7030" for this suite. Jun 29 13:30:40.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:30:41.083: INFO: namespace gc-7030 deletion completed in 10.147361729s • [SLOW TEST:50.437 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:30:41.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-db2b60b2-166f-4eea-8b0d-2fd03f25d530 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:30:45.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1479" for this suite. 
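The binary-data spec above stores ordinary text under data and raw bytes under binaryData in the same ConfigMap, then waits for both files to appear in the mounted volume with exactly those contents. A sketch of such a ConfigMap, with key names and bytes illustrative.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries a text key in Data and raw bytes in BinaryData;
// mounted as a volume, each key becomes a file holding exactly those bytes.
func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00, 0xff}},
	}
}
```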
Jun 29 13:31:07.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:31:07.338: INFO: namespace configmap-1479 deletion completed in 22.156740784s • [SLOW TEST:26.254 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:31:07.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 29 13:31:07.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 29 13:31:07.438: INFO: Waiting for terminating namespaces to be deleted... Jun 29 13:31:07.440: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 29 13:31:07.445: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.445: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 13:31:07.445: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.445: INFO: Container kindnet-cni ready: true, restart count 4 Jun 29 13:31:07.445: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 29 13:31:07.451: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.451: INFO: Container coredns ready: true, restart count 0 Jun 29 13:31:07.451: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.451: INFO: Container coredns ready: true, restart count 0 Jun 29 13:31:07.451: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.451: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 13:31:07.451: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 29 13:31:07.451: INFO: Container kindnet-cni ready: true, restart count 4 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Jun 29 13:31:07.560: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Jun 29 13:31:07.560: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Jun 29 13:31:07.560: INFO: Pod kindnet-gwz5g 
requesting resource cpu=100m on Node iruya-worker Jun 29 13:31:07.560: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Jun 29 13:31:07.560: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Jun 29 13:31:07.560: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac.161d06ef3268c3b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8190/filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac.161d06ef809de4c4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac.161d06efe4df3ddf], Reason = [Created], Message = [Created container filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac] STEP: Considering event: Type = [Normal], Name = [filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac.161d06eff82c1571], Reason = [Started], Message = [Started container filler-pod-a75cd379-cae1-4fd5-bda9-38be5bb1feac] STEP: Considering event: Type = [Normal], Name = [filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655.161d06ef3451a583], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8190/filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655.161d06efb390557a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655.161d06f00017eb70], Reason = [Created], Message = [Created container filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655] STEP: Considering event: Type = [Normal], Name = [filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655.161d06f00dbf6708], Reason = [Started], Message = [Started container filler-pod-cab669b4-fe56-4ca8-b931-217b3c13f655] STEP: Considering event: Type = [Warning], Name = [additional-pod.161d06f09b7b9e33], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:31:14.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8190" for this suite. 
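The FailedScheduling event above is ordinary Pod.spec.resources accounting: the suite fills each node's remaining allocatable CPU with pause-image filler pods, then submits one more request that cannot fit anywhere. A hand-rolled equivalent (the CPU figures are illustrative; the real filler sizes are computed per node from allocatable capacity minus the requests already logged above):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: filler-pod          # hypothetical; the suite generates one filler per node
  spec:
    containers:
    - name: filler
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: 1500m          # sized to leave less than 600m allocatable on the node
        limits:
          cpu: 1500m
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: additional-pod
  spec:
    containers:
    - name: additional-pod
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: 600m           # no node has this much left, so the pod stays Pending
  EOF
  # surfaces the same "0/3 nodes are available: ... Insufficient cpu." message:
  kubectl get events --field-selector reason=FailedScheduling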
Jun 29 13:31:22.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:31:22.797: INFO: namespace sched-pred-8190 deletion completed in 8.092035716s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.459 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:31:22.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 29 13:31:22.837: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:31:30.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2018" for this suite. 
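For reference, a RestartAlways pod with two init containers, roughly what this case builds, can be sketched as below (names and images are assumptions, not read from the log). Both init containers must exit 0, in order, before the regular container is started:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-demo       # hypothetical name
  spec:
    restartPolicy: Always     # the "RestartAlways" in the case name
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1
  EOF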
Jun 29 13:31:52.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:31:52.883: INFO: namespace init-container-2018 deletion completed in 22.0844742s • [SLOW TEST:30.086 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:31:52.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:31:57.095: INFO: Waiting up to 5m0s for pod "client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c" in namespace "pods-4815" to be "success or failure" Jun 29 13:31:57.100: INFO: Pod "client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663492ms Jun 29 13:31:59.104: INFO: Pod "client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009181971s Jun 29 13:32:01.109: INFO: Pod "client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013430493s STEP: Saw pod success Jun 29 13:32:01.109: INFO: Pod "client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c" satisfied condition "success or failure" Jun 29 13:32:01.112: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c container env3cont: STEP: delete the pod Jun 29 13:32:01.171: INFO: Waiting for pod client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c to disappear Jun 29 13:32:01.287: INFO: Pod client-envvars-f21fbfe7-a904-4bd2-b383-8ff4bba24e9c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:32:01.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4815" for this suite. 
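The env3cont container above succeeds because the kubelet injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables for every service that already exists when a pod starts, which is why the test creates its server pod and service before the client pod. A sketch of the same shape (service name, ports, and selector are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: fooservice
  spec:
    selector:
      name: server-pod
    ports:
    - port: 8765
      targetPort: 8080
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-envvars-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env3cont
      image: docker.io/library/busybox:1.29
      # the service need not have ready endpoints for the variables to appear;
      # prints FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT
      command: ["sh", "-c", "env | grep FOOSERVICE_SERVICE"]
  EOF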
Jun 29 13:32:43.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:32:43.394: INFO: namespace pods-4815 deletion completed in 42.098483056s • [SLOW TEST:50.509 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:32:43.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:32:43.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c" in namespace "projected-5453" to be "success or failure" Jun 29 13:32:43.502: INFO: Pod "downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.499199ms Jun 29 13:32:45.506: INFO: Pod "downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013243686s Jun 29 13:32:47.509: INFO: Pod "downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016427284s STEP: Saw pod success Jun 29 13:32:47.509: INFO: Pod "downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c" satisfied condition "success or failure" Jun 29 13:32:47.512: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c container client-container: STEP: delete the pod Jun 29 13:32:47.546: INFO: Waiting for pod downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c to disappear Jun 29 13:32:47.592: INFO: Pod downwardapi-volume-c96545a4-7084-406b-bec0-de94fb10224c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:32:47.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5453" for this suite. 
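The cpu-limit case exposes the container's own limits.cpu through a projected downwardAPI volume, scaled by a divisor. A minimal sketch (names and the 1250m figure are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # prints 1250 with a 1m divisor
      resources:
        limits:
          cpu: 1250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m
  EOF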
Jun 29 13:32:53.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:32:53.693: INFO: namespace projected-5453 deletion completed in 6.09687703s • [SLOW TEST:10.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:32:53.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5446 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5446 STEP: Creating statefulset with conflicting port in namespace statefulset-5446 STEP: Waiting until pod test-pod will start running in namespace statefulset-5446 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5446 Jun 29 13:32:57.840: INFO: Observed stateful pod in namespace: statefulset-5446, name: ss-0, uid: bf443f95-aae9-4ab6-a7c8-80f7c8531409, status phase: Pending. Waiting for statefulset controller to delete. Jun 29 13:32:58.387: INFO: Observed stateful pod in namespace: statefulset-5446, name: ss-0, uid: bf443f95-aae9-4ab6-a7c8-80f7c8531409, status phase: Failed. Waiting for statefulset controller to delete. Jun 29 13:32:58.480: INFO: Observed stateful pod in namespace: statefulset-5446, name: ss-0, uid: bf443f95-aae9-4ab6-a7c8-80f7c8531409, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 29 13:32:58.489: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5446 STEP: Removing pod with conflicting port in namespace statefulset-5446 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5446 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 29 13:33:04.553: INFO: Deleting all statefulsets in ns statefulset-5446 Jun 29 13:33:04.556: INFO: Scaling statefulset ss to 0 Jun 29 13:33:14.587: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 13:33:14.590: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:33:14.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5446" for this suite. Jun 29 13:33:20.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:33:20.692: INFO: namespace statefulset-5446 deletion completed in 6.081599995s • [SLOW TEST:26.999 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:33:20.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:33:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1225" for this suite.
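The hostAliases case that just finished only needs a pod like the following; the kubelet merges the aliases into the /etc/hosts file it manages (the pod name, IP, and hostnames are illustrative, not taken from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-host-aliases
  spec:
    restartPolicy: Never
    hostAliases:              # merged by the kubelet into the managed /etc/hosts
    - ip: "123.45.67.89"
      hostnames:
      - foo.remote
      - bar.remote
    containers:
    - name: busybox-host-aliases
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/hosts"]
  EOF
  kubectl logs busybox-host-aliases | grep 123.45.67.89   # shows the injected entries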
Jun 29 13:34:02.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:34:02.900: INFO: namespace kubelet-test-1225 deletion completed in 38.099160551s • [SLOW TEST:42.208 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:34:02.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:35:02.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7212" for this suite. 
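Note the elapsed minute in this spec: it is a soak that asserts two invariants, namely that a pod whose readiness probe keeps failing never reports Ready, and that, unlike a liveness failure, it is never restarted for it. A sketch (probe timings, image, and name are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver      # hypothetical name
  spec:
    containers:
    - name: test-webserver
      image: docker.io/library/busybox:1.29
      command: ["sleep", "3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails, so Ready stays False
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # after a minute: READY 0/1, RESTARTS 0 -- readiness failures never restart a container
  kubectl get pod test-webserver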
Jun 29 13:35:25.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:35:25.104: INFO: namespace container-probe-7212 deletion completed in 22.109706918s • [SLOW TEST:82.203 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:35:25.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:35:25.136: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:35:31.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9267" for this suite. 
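The remote-execution case drives the pods/<name>/exec subresource over a websocket, with stdin, stdout, and stderr multiplexed onto numbered channels; that is the stream bookkeeping visible in the kubectl stderr block further down. The same subresource can be smoke-tested from the CLI (pod name and command are illustrative):

  #   GET /api/v1/namespaces/<ns>/pods/<pod>/exec?command=cat&command=/etc/resolv.conf&stdout=true
  # kubectl exec negotiates an upgraded streaming connection against that same endpoint:
  kubectl exec pod-exec-websocket -- cat /etc/resolv.conf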
Jun 29 13:36:13.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:36:13.403: INFO: namespace pods-9267 deletion completed in 42.114361011s • [SLOW TEST:48.299 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:36:13.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jun 29 13:36:13.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6005 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 29 13:36:19.428: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0629 13:36:19.357328 1185 log.go:172] (0xc0006e89a0) (0xc00070cbe0) Create stream\nI0629 13:36:19.357370 1185 log.go:172] (0xc0006e89a0) (0xc00070cbe0) Stream added, broadcasting: 1\nI0629 13:36:19.361897 1185 log.go:172] (0xc0006e89a0) Reply frame received for 1\nI0629 13:36:19.361939 1185 log.go:172] (0xc0006e89a0) (0xc000457ea0) Create stream\nI0629 13:36:19.361950 1185 log.go:172] (0xc0006e89a0) (0xc000457ea0) Stream added, broadcasting: 3\nI0629 13:36:19.363018 1185 log.go:172] (0xc0006e89a0) Reply frame received for 3\nI0629 13:36:19.363066 1185 log.go:172] (0xc0006e89a0) (0xc00070c0a0) Create stream\nI0629 13:36:19.363087 1185 log.go:172] (0xc0006e89a0) (0xc00070c0a0) Stream added, broadcasting: 5\nI0629 13:36:19.364112 1185 log.go:172] (0xc0006e89a0) Reply frame received for 5\nI0629 13:36:19.364157 1185 log.go:172] (0xc0006e89a0) (0xc000314140) Create stream\nI0629 13:36:19.364177 1185 log.go:172] (0xc0006e89a0) (0xc000314140) Stream added, broadcasting: 7\nI0629 13:36:19.365151 1185 log.go:172] (0xc0006e89a0) Reply frame received for 7\nI0629 13:36:19.365320 1185 log.go:172] (0xc000457ea0) (3) Writing data frame\nI0629 13:36:19.365471 1185 log.go:172] (0xc000457ea0) (3) Writing data frame\nI0629 13:36:19.366406 1185 log.go:172] (0xc0006e89a0) Data frame received for 5\nI0629 13:36:19.366433 1185 log.go:172] (0xc00070c0a0) (5) Data frame handling\nI0629 13:36:19.366451 1185 log.go:172] (0xc00070c0a0) (5) Data frame sent\nI0629 13:36:19.366925 1185 log.go:172] (0xc0006e89a0) Data frame received for 5\nI0629 13:36:19.366949 1185 log.go:172] (0xc00070c0a0) (5) Data frame handling\nI0629 13:36:19.366967 1185 log.go:172] (0xc00070c0a0) (5) Data frame sent\nI0629 13:36:19.403785 1185 log.go:172] (0xc0006e89a0) Data frame received for 7\nI0629 13:36:19.403839 1185 log.go:172] (0xc000314140) (7) Data frame handling\nI0629 13:36:19.403863 1185 log.go:172] (0xc0006e89a0) Data frame received for 5\nI0629 13:36:19.403873 1185 log.go:172] (0xc00070c0a0) (5) Data frame handling\nI0629 13:36:19.404229 1185 log.go:172] (0xc0006e89a0) Data frame received for 1\nI0629 13:36:19.404265 1185 log.go:172] (0xc0006e89a0) (0xc000457ea0) Stream removed, broadcasting: 3\nI0629 13:36:19.404299 1185 log.go:172] (0xc00070cbe0) (1) Data frame handling\nI0629 13:36:19.404325 1185 log.go:172] (0xc00070cbe0) (1) Data frame sent\nI0629 13:36:19.404346 1185 log.go:172] (0xc0006e89a0) (0xc00070cbe0) Stream removed, broadcasting: 1\nI0629 13:36:19.404371 1185 log.go:172] (0xc0006e89a0) Go away received\nI0629 13:36:19.404684 1185 log.go:172] (0xc0006e89a0) (0xc00070cbe0) Stream removed, broadcasting: 1\nI0629 13:36:19.404717 1185 log.go:172] (0xc0006e89a0) (0xc000457ea0) Stream removed, broadcasting: 3\nI0629 13:36:19.404734 1185 log.go:172] (0xc0006e89a0) (0xc00070c0a0) Stream removed, broadcasting: 5\nI0629 13:36:19.404750 1185 log.go:172] (0xc0006e89a0) (0xc000314140) Stream removed, broadcasting: 7\n" Jun 29 13:36:19.428: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:36:21.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6005" for this suite. 
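The deprecation warning captured in the stderr above is about --generator=job/v1. The undeprecated way to get the same create-run-delete cycle (minus the stdin attach, which is not reproduced here) is kubectl create job:

  kubectl create job e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    -- sh -c 'echo "stdin closed"'
  kubectl wait --for=condition=complete job/e2e-test-rm-busybox-job --timeout=60s
  kubectl delete job e2e-test-rm-busybox-job   # the cleanup that --rm performed automatically above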
Jun 29 13:36:33.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:36:33.531: INFO: namespace kubectl-6005 deletion completed in 12.093201494s • [SLOW TEST:20.127 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:36:33.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:36:37.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8758" for this suite. 
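Secret and ConfigMap volumes are both materialized on top of kubelet-managed wrapped emptyDirs; the "should not conflict" case checks that two of them coexist in one pod. Roughly (all names here are hypothetical):

  kubectl create secret generic wrapped-secret --from-literal=data-1=value-1
  kubectl create configmap wrapped-configmap --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-volumes-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
      - name: configmap-volume
        mountPath: /etc/configmap-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: wrapped-secret
    - name: configmap-volume
      configMap:
        name: wrapped-configmap
  EOF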
Jun 29 13:36:43.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:36:43.844: INFO: namespace emptydir-wrapper-8758 deletion completed in 6.088677657s • [SLOW TEST:10.313 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:36:43.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:36:47.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7901" for this suite. Jun 29 13:37:37.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:37:38.037: INFO: namespace kubelet-test-7901 deletion completed in 50.090342487s • [SLOW TEST:54.193 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:37:38.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jun 29 13:37:42.168: INFO: 
Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 29 13:37:52.266: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:37:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7381" for this suite. Jun 29 13:37:58.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:37:58.372: INFO: namespace pods-7381 deletion completed in 6.097179779s • [SLOW TEST:20.334 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:37:58.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 29 13:37:58.435: INFO: Waiting up to 5m0s for pod "pod-19b10cb9-4f1e-42da-8a51-9602818c70dc" in namespace "emptydir-825" to be "success or failure" Jun 29 13:37:58.439: INFO: Pod "pod-19b10cb9-4f1e-42da-8a51-9602818c70dc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.76512ms Jun 29 13:38:00.443: INFO: Pod "pod-19b10cb9-4f1e-42da-8a51-9602818c70dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007059623s Jun 29 13:38:02.480: INFO: Pod "pod-19b10cb9-4f1e-42da-8a51-9602818c70dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044655389s STEP: Saw pod success Jun 29 13:38:02.480: INFO: Pod "pod-19b10cb9-4f1e-42da-8a51-9602818c70dc" satisfied condition "success or failure" Jun 29 13:38:02.483: INFO: Trying to get logs from node iruya-worker2 pod pod-19b10cb9-4f1e-42da-8a51-9602818c70dc container test-container: STEP: delete the pod Jun 29 13:38:02.518: INFO: Waiting for pod pod-19b10cb9-4f1e-42da-8a51-9602818c70dc to disappear Jun 29 13:38:02.528: INFO: Pod pod-19b10cb9-4f1e-42da-8a51-9602818c70dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:38:02.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-825" for this suite.
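The (non-root,0777,tmpfs) tuple in the case name maps onto runAsUser, the mode of the file the test writes, and emptyDir.medium: Memory. A sketch (user ID, paths, and names are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001         # the "non-root" part
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "mount | grep /test-volume; touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory        # the "tmpfs" part; backs the volume with RAM
  EOF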
Jun 29 13:38:08.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:38:08.620: INFO: namespace emptydir-825 deletion completed in 6.08882593s • [SLOW TEST:10.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:38:08.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:38:08.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d" in namespace "projected-6388" to be "success or failure" Jun 29 13:38:08.719: INFO: Pod "downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.705166ms Jun 29 13:38:10.723: INFO: Pod "downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033723892s Jun 29 13:38:12.728: INFO: Pod "downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038257548s STEP: Saw pod success Jun 29 13:38:12.728: INFO: Pod "downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d" satisfied condition "success or failure" Jun 29 13:38:12.731: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d container client-container: STEP: delete the pod Jun 29 13:38:12.776: INFO: Waiting for pod downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d to disappear Jun 29 13:38:12.794: INFO: Pod downwardapi-volume-439d56c7-56a6-4546-892d-414e72aee07d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:38:12.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6388" for this suite. 
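Unlike the cpu-limit variant earlier, the podname case uses a fieldRef rather than a resourceFieldRef; the mounted file simply contains the pod's own metadata.name. Sketch (names are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/podname"]   # prints the pod's own name
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF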
Jun 29 13:38:18.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:38:18.923: INFO: namespace projected-6388 deletion completed in 6.097714592s • [SLOW TEST:10.303 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:38:18.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-78 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[] Jun 29 13:38:19.163: INFO: Get endpoints failed (31.043222ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 29 13:38:20.168: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[] (1.035103704s elapsed) STEP: Creating pod pod1 in namespace services-78 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod1:[100]] Jun 29 13:38:23.252: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod1:[100]] (3.077344915s elapsed) STEP: Creating pod pod2 in namespace services-78 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod1:[100] pod2:[101]] Jun 29 13:38:27.367: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod1:[100] pod2:[101]] (4.11109689s elapsed) STEP: Deleting pod pod1 in namespace services-78 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod2:[101]] Jun 29 13:38:27.454: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod2:[101]] (83.449925ms elapsed) STEP: Deleting pod pod2 in namespace services-78 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[] Jun 29 13:38:28.473: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[] (1.014129081s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:38:28.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-78" for this suite. 
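The endpoints maps logged above, pod1:[100] and pod2:[101], fall out of named target ports: each pod only declares the container port matching one of the service's two ports. Approximately (pod names and container ports mirror the log where it shows them; the service ports and labels are assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      name: multi-endpoint-test
    ports:
    - name: portname1
      port: 80
      targetPort: portname1   # resolved per pod via the named containerPort
    - name: portname2
      port: 81
      targetPort: portname2
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    labels:
      name: multi-endpoint-test
  spec:
    containers:
    - name: pod1
      image: k8s.gcr.io/pause:3.1
      ports:
      - name: portname1
        containerPort: 100
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod2
    labels:
      name: multi-endpoint-test
  spec:
    containers:
    - name: pod2
      image: k8s.gcr.io/pause:3.1
      ports:
      - name: portname2
        containerPort: 101
  EOF
  kubectl get endpoints multi-endpoint-test   # one address per ready pod, per named port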
Jun 29 13:38:34.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:38:34.598: INFO: namespace services-78 deletion completed in 6.092514043s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.675 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:38:34.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 29 13:38:34.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110433,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 29 13:38:34.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110434,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 29 13:38:34.718: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110435,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 29 13:38:44.747: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110456,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 29 13:38:44.747: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110457,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 29 13:38:44.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7428,SelfLink:/api/v1/namespaces/watch-7428/configmaps/e2e-watch-test-label-changed,UID:2dea0064-d5a6-4d96-b7da-c2ad55f264a4,ResourceVersion:19110458,Generation:0,CreationTimestamp:2020-06-29 13:38:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:38:44.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7428" for this suite. 
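The ADDED / MODIFIED / DELETED sequence above is the defining behavior of a label-selected watch: relabeling an object out of the selector is delivered to the watcher as a deletion, and relabeling it back in as an addition. The same effect is visible with kubectl (the object name and label values mirror the log; the shell choreography is illustrative):

  kubectl create configmap e2e-watch-test-label-changed --from-literal=mutation=0
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
  kubectl get configmaps --watch -l watch-this-configmap=label-changed-and-restored &
  # relabel out of the selector: the row drops out of the watch output (a DELETED event)
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=other --overwrite
  # relabel back in: the row reappears (an ADDED event)
  kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite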
Jun 29 13:38:50.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:38:50.862: INFO: namespace watch-7428 deletion completed in 6.110357757s • [SLOW TEST:16.263 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:38:50.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 29 13:39:00.952: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:00.953: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:00.993486 6 log.go:172] (0xc000ff6630) (0xc003398a00) Create stream I0629 13:39:00.993516 6 log.go:172] (0xc000ff6630) (0xc003398a00) Stream added, broadcasting: 1 I0629 13:39:00.996733 6 log.go:172] (0xc000ff6630) Reply frame received for 1 I0629 13:39:00.996779 6 log.go:172] (0xc000ff6630) (0xc0026e9cc0) Create stream I0629 13:39:00.996795 6 log.go:172] (0xc000ff6630) (0xc0026e9cc0) Stream added, broadcasting: 3 I0629 13:39:00.998337 6 log.go:172] (0xc000ff6630) Reply frame received for 3 I0629 13:39:00.998392 6 log.go:172] (0xc000ff6630) (0xc0026e9d60) Create stream I0629 13:39:00.998405 6 log.go:172] (0xc000ff6630) (0xc0026e9d60) Stream added, broadcasting: 5 I0629 13:39:00.999437 6 log.go:172] (0xc000ff6630) Reply frame received for 5 I0629 13:39:01.094873 6 log.go:172] (0xc000ff6630) Data frame received for 3 I0629 13:39:01.094895 6 log.go:172] (0xc0026e9cc0) (3) Data frame handling I0629 13:39:01.094902 6 log.go:172] (0xc0026e9cc0) (3) Data frame sent I0629 13:39:01.094907 6 log.go:172] (0xc000ff6630) Data frame received for 3 I0629 13:39:01.094912 6 log.go:172] (0xc0026e9cc0) (3) Data frame handling I0629 13:39:01.095130 6 log.go:172] (0xc000ff6630) Data frame received for 5 I0629 13:39:01.095146 6 log.go:172] (0xc0026e9d60) (5) Data frame handling I0629 13:39:01.097305 6 log.go:172] (0xc000ff6630) Data frame received for 1 I0629 13:39:01.097317 6 log.go:172] (0xc003398a00) (1) Data frame handling I0629 13:39:01.097324 6 log.go:172] (0xc003398a00) (1) Data frame sent I0629 13:39:01.097334 6 log.go:172] (0xc000ff6630) (0xc003398a00) Stream removed, broadcasting: 1 I0629 13:39:01.097413 
6 log.go:172] (0xc000ff6630) (0xc003398a00) Stream removed, broadcasting: 1 I0629 13:39:01.097428 6 log.go:172] (0xc000ff6630) (0xc0026e9cc0) Stream removed, broadcasting: 3 I0629 13:39:01.097626 6 log.go:172] (0xc000ff6630) Go away received I0629 13:39:01.097666 6 log.go:172] (0xc000ff6630) (0xc0026e9d60) Stream removed, broadcasting: 5 Jun 29 13:39:01.097: INFO: Exec stderr: "" Jun 29 13:39:01.097: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.097: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.133657 6 log.go:172] (0xc002832fd0) (0xc0026e9f40) Create stream I0629 13:39:01.133686 6 log.go:172] (0xc002832fd0) (0xc0026e9f40) Stream added, broadcasting: 1 I0629 13:39:01.136696 6 log.go:172] (0xc002832fd0) Reply frame received for 1 I0629 13:39:01.136741 6 log.go:172] (0xc002832fd0) (0xc002cfcfa0) Create stream I0629 13:39:01.136758 6 log.go:172] (0xc002832fd0) (0xc002cfcfa0) Stream added, broadcasting: 3 I0629 13:39:01.137851 6 log.go:172] (0xc002832fd0) Reply frame received for 3 I0629 13:39:01.137925 6 log.go:172] (0xc002832fd0) (0xc00271c0a0) Create stream I0629 13:39:01.137954 6 log.go:172] (0xc002832fd0) (0xc00271c0a0) Stream added, broadcasting: 5 I0629 13:39:01.138736 6 log.go:172] (0xc002832fd0) Reply frame received for 5 I0629 13:39:01.229459 6 log.go:172] (0xc002832fd0) Data frame received for 3 I0629 13:39:01.229494 6 log.go:172] (0xc002cfcfa0) (3) Data frame handling I0629 13:39:01.229514 6 log.go:172] (0xc002cfcfa0) (3) Data frame sent I0629 13:39:01.229714 6 log.go:172] (0xc002832fd0) Data frame received for 5 I0629 13:39:01.229760 6 log.go:172] (0xc00271c0a0) (5) Data frame handling I0629 13:39:01.229783 6 log.go:172] (0xc002832fd0) Data frame received for 3 I0629 13:39:01.229792 6 log.go:172] (0xc002cfcfa0) (3) Data frame handling I0629 13:39:01.230729 6 log.go:172] (0xc002832fd0) Data frame received for 1 I0629 13:39:01.230783 6 log.go:172] (0xc0026e9f40) (1) Data frame handling I0629 13:39:01.230813 6 log.go:172] (0xc0026e9f40) (1) Data frame sent I0629 13:39:01.230847 6 log.go:172] (0xc002832fd0) (0xc0026e9f40) Stream removed, broadcasting: 1 I0629 13:39:01.230866 6 log.go:172] (0xc002832fd0) Go away received I0629 13:39:01.230989 6 log.go:172] (0xc002832fd0) (0xc0026e9f40) Stream removed, broadcasting: 1 I0629 13:39:01.231005 6 log.go:172] (0xc002832fd0) (0xc002cfcfa0) Stream removed, broadcasting: 3 I0629 13:39:01.231013 6 log.go:172] (0xc002832fd0) (0xc00271c0a0) Stream removed, broadcasting: 5 Jun 29 13:39:01.231: INFO: Exec stderr: "" Jun 29 13:39:01.231: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.231: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.259805 6 log.go:172] (0xc000ff7600) (0xc003398e60) Create stream I0629 13:39:01.259834 6 log.go:172] (0xc000ff7600) (0xc003398e60) Stream added, broadcasting: 1 I0629 13:39:01.262739 6 log.go:172] (0xc000ff7600) Reply frame received for 1 I0629 13:39:01.262773 6 log.go:172] (0xc000ff7600) (0xc002cfd0e0) Create stream I0629 13:39:01.262784 6 log.go:172] (0xc000ff7600) (0xc002cfd0e0) Stream added, broadcasting: 3 I0629 13:39:01.263634 6 log.go:172] (0xc000ff7600) Reply frame received for 3 I0629 13:39:01.263680 6 log.go:172] (0xc000ff7600) (0xc002cfd180) Create stream 
I0629 13:39:01.263692 6 log.go:172] (0xc000ff7600) (0xc002cfd180) Stream added, broadcasting: 5 I0629 13:39:01.264415 6 log.go:172] (0xc000ff7600) Reply frame received for 5 I0629 13:39:01.331418 6 log.go:172] (0xc000ff7600) Data frame received for 5 I0629 13:39:01.331448 6 log.go:172] (0xc002cfd180) (5) Data frame handling I0629 13:39:01.331470 6 log.go:172] (0xc000ff7600) Data frame received for 3 I0629 13:39:01.331479 6 log.go:172] (0xc002cfd0e0) (3) Data frame handling I0629 13:39:01.331490 6 log.go:172] (0xc002cfd0e0) (3) Data frame sent I0629 13:39:01.331497 6 log.go:172] (0xc000ff7600) Data frame received for 3 I0629 13:39:01.331505 6 log.go:172] (0xc002cfd0e0) (3) Data frame handling I0629 13:39:01.335857 6 log.go:172] (0xc000ff7600) Data frame received for 1 I0629 13:39:01.335878 6 log.go:172] (0xc003398e60) (1) Data frame handling I0629 13:39:01.335897 6 log.go:172] (0xc003398e60) (1) Data frame sent I0629 13:39:01.335912 6 log.go:172] (0xc000ff7600) (0xc003398e60) Stream removed, broadcasting: 1 I0629 13:39:01.335995 6 log.go:172] (0xc000ff7600) (0xc003398e60) Stream removed, broadcasting: 1 I0629 13:39:01.336007 6 log.go:172] (0xc000ff7600) (0xc002cfd0e0) Stream removed, broadcasting: 3 I0629 13:39:01.336017 6 log.go:172] (0xc000ff7600) (0xc002cfd180) Stream removed, broadcasting: 5 Jun 29 13:39:01.336: INFO: Exec stderr: "" Jun 29 13:39:01.336: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.336: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.336116 6 log.go:172] (0xc000ff7600) Go away received I0629 13:39:01.362319 6 log.go:172] (0xc0023ffc30) (0xc002cfd220) Create stream I0629 13:39:01.362356 6 log.go:172] (0xc0023ffc30) (0xc002cfd220) Stream added, broadcasting: 1 I0629 13:39:01.365060 6 log.go:172] (0xc0023ffc30) Reply frame received for 1 I0629 13:39:01.365098 6 log.go:172] (0xc0023ffc30) (0xc0028b4140) Create stream I0629 13:39:01.365279 6 log.go:172] (0xc0023ffc30) (0xc0028b4140) Stream added, broadcasting: 3 I0629 13:39:01.366040 6 log.go:172] (0xc0023ffc30) Reply frame received for 3 I0629 13:39:01.366074 6 log.go:172] (0xc0023ffc30) (0xc002104460) Create stream I0629 13:39:01.366086 6 log.go:172] (0xc0023ffc30) (0xc002104460) Stream added, broadcasting: 5 I0629 13:39:01.367088 6 log.go:172] (0xc0023ffc30) Reply frame received for 5 I0629 13:39:01.431346 6 log.go:172] (0xc0023ffc30) Data frame received for 5 I0629 13:39:01.431405 6 log.go:172] (0xc002104460) (5) Data frame handling I0629 13:39:01.431442 6 log.go:172] (0xc0023ffc30) Data frame received for 3 I0629 13:39:01.431457 6 log.go:172] (0xc0028b4140) (3) Data frame handling I0629 13:39:01.431482 6 log.go:172] (0xc0028b4140) (3) Data frame sent I0629 13:39:01.431504 6 log.go:172] (0xc0023ffc30) Data frame received for 3 I0629 13:39:01.431518 6 log.go:172] (0xc0028b4140) (3) Data frame handling I0629 13:39:01.433397 6 log.go:172] (0xc0023ffc30) Data frame received for 1 I0629 13:39:01.433426 6 log.go:172] (0xc002cfd220) (1) Data frame handling I0629 13:39:01.433448 6 log.go:172] (0xc002cfd220) (1) Data frame sent I0629 13:39:01.433476 6 log.go:172] (0xc0023ffc30) (0xc002cfd220) Stream removed, broadcasting: 1 I0629 13:39:01.433514 6 log.go:172] (0xc0023ffc30) Go away received I0629 13:39:01.433575 6 log.go:172] (0xc0023ffc30) (0xc002cfd220) Stream removed, broadcasting: 1 I0629 13:39:01.433596 6 log.go:172] (0xc0023ffc30) 
(0xc0028b4140) Stream removed, broadcasting: 3 I0629 13:39:01.433619 6 log.go:172] (0xc0023ffc30) (0xc002104460) Stream removed, broadcasting: 5 Jun 29 13:39:01.433: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 29 13:39:01.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.433: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.464509 6 log.go:172] (0xc000c8b810) (0xc0028b4460) Create stream I0629 13:39:01.464533 6 log.go:172] (0xc000c8b810) (0xc0028b4460) Stream added, broadcasting: 1 I0629 13:39:01.467464 6 log.go:172] (0xc000c8b810) Reply frame received for 1 I0629 13:39:01.467543 6 log.go:172] (0xc000c8b810) (0xc00271c500) Create stream I0629 13:39:01.467561 6 log.go:172] (0xc000c8b810) (0xc00271c500) Stream added, broadcasting: 3 I0629 13:39:01.468489 6 log.go:172] (0xc000c8b810) Reply frame received for 3 I0629 13:39:01.468569 6 log.go:172] (0xc000c8b810) (0xc003398f00) Create stream I0629 13:39:01.468589 6 log.go:172] (0xc000c8b810) (0xc003398f00) Stream added, broadcasting: 5 I0629 13:39:01.469867 6 log.go:172] (0xc000c8b810) Reply frame received for 5 I0629 13:39:01.533862 6 log.go:172] (0xc000c8b810) Data frame received for 3 I0629 13:39:01.533886 6 log.go:172] (0xc00271c500) (3) Data frame handling I0629 13:39:01.533899 6 log.go:172] (0xc00271c500) (3) Data frame sent I0629 13:39:01.533905 6 log.go:172] (0xc000c8b810) Data frame received for 3 I0629 13:39:01.533910 6 log.go:172] (0xc00271c500) (3) Data frame handling I0629 13:39:01.534040 6 log.go:172] (0xc000c8b810) Data frame received for 5 I0629 13:39:01.534057 6 log.go:172] (0xc003398f00) (5) Data frame handling I0629 13:39:01.535507 6 log.go:172] (0xc000c8b810) Data frame received for 1 I0629 13:39:01.535528 6 log.go:172] (0xc0028b4460) (1) Data frame handling I0629 13:39:01.535546 6 log.go:172] (0xc0028b4460) (1) Data frame sent I0629 13:39:01.535563 6 log.go:172] (0xc000c8b810) (0xc0028b4460) Stream removed, broadcasting: 1 I0629 13:39:01.535581 6 log.go:172] (0xc000c8b810) Go away received I0629 13:39:01.535735 6 log.go:172] (0xc000c8b810) (0xc0028b4460) Stream removed, broadcasting: 1 I0629 13:39:01.535765 6 log.go:172] (0xc000c8b810) (0xc00271c500) Stream removed, broadcasting: 3 I0629 13:39:01.535784 6 log.go:172] (0xc000c8b810) (0xc003398f00) Stream removed, broadcasting: 5 Jun 29 13:39:01.535: INFO: Exec stderr: "" Jun 29 13:39:01.535: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.535: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.571488 6 log.go:172] (0xc002ec4160) (0xc0028b4780) Create stream I0629 13:39:01.571523 6 log.go:172] (0xc002ec4160) (0xc0028b4780) Stream added, broadcasting: 1 I0629 13:39:01.575298 6 log.go:172] (0xc002ec4160) Reply frame received for 1 I0629 13:39:01.575333 6 log.go:172] (0xc002ec4160) (0xc002cfd360) Create stream I0629 13:39:01.575432 6 log.go:172] (0xc002ec4160) (0xc002cfd360) Stream added, broadcasting: 3 I0629 13:39:01.576329 6 log.go:172] (0xc002ec4160) Reply frame received for 3 I0629 13:39:01.576355 6 log.go:172] (0xc002ec4160) (0xc002cfd400) Create stream I0629 13:39:01.576371 6 log.go:172] (0xc002ec4160) (0xc002cfd400) Stream added, 
broadcasting: 5 I0629 13:39:01.577411 6 log.go:172] (0xc002ec4160) Reply frame received for 5 I0629 13:39:01.632357 6 log.go:172] (0xc002ec4160) Data frame received for 5 I0629 13:39:01.632388 6 log.go:172] (0xc002cfd400) (5) Data frame handling I0629 13:39:01.632437 6 log.go:172] (0xc002ec4160) Data frame received for 3 I0629 13:39:01.632491 6 log.go:172] (0xc002cfd360) (3) Data frame handling I0629 13:39:01.632527 6 log.go:172] (0xc002cfd360) (3) Data frame sent I0629 13:39:01.632550 6 log.go:172] (0xc002ec4160) Data frame received for 3 I0629 13:39:01.632567 6 log.go:172] (0xc002cfd360) (3) Data frame handling I0629 13:39:01.634438 6 log.go:172] (0xc002ec4160) Data frame received for 1 I0629 13:39:01.634469 6 log.go:172] (0xc0028b4780) (1) Data frame handling I0629 13:39:01.634493 6 log.go:172] (0xc0028b4780) (1) Data frame sent I0629 13:39:01.634517 6 log.go:172] (0xc002ec4160) (0xc0028b4780) Stream removed, broadcasting: 1 I0629 13:39:01.634557 6 log.go:172] (0xc002ec4160) Go away received I0629 13:39:01.634638 6 log.go:172] (0xc002ec4160) (0xc0028b4780) Stream removed, broadcasting: 1 I0629 13:39:01.634679 6 log.go:172] (0xc002ec4160) (0xc002cfd360) Stream removed, broadcasting: 3 I0629 13:39:01.634703 6 log.go:172] (0xc002ec4160) (0xc002cfd400) Stream removed, broadcasting: 5 Jun 29 13:39:01.634: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 29 13:39:01.634: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.634: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.687544 6 log.go:172] (0xc002ebcd10) (0xc00271c820) Create stream I0629 13:39:01.687582 6 log.go:172] (0xc002ebcd10) (0xc00271c820) Stream added, broadcasting: 1 I0629 13:39:01.690401 6 log.go:172] (0xc002ebcd10) Reply frame received for 1 I0629 13:39:01.690477 6 log.go:172] (0xc002ebcd10) (0xc0028b4820) Create stream I0629 13:39:01.690491 6 log.go:172] (0xc002ebcd10) (0xc0028b4820) Stream added, broadcasting: 3 I0629 13:39:01.691490 6 log.go:172] (0xc002ebcd10) Reply frame received for 3 I0629 13:39:01.691531 6 log.go:172] (0xc002ebcd10) (0xc002104500) Create stream I0629 13:39:01.691544 6 log.go:172] (0xc002ebcd10) (0xc002104500) Stream added, broadcasting: 5 I0629 13:39:01.692479 6 log.go:172] (0xc002ebcd10) Reply frame received for 5 I0629 13:39:01.761034 6 log.go:172] (0xc002ebcd10) Data frame received for 5 I0629 13:39:01.761059 6 log.go:172] (0xc002104500) (5) Data frame handling I0629 13:39:01.761324 6 log.go:172] (0xc002ebcd10) Data frame received for 3 I0629 13:39:01.761373 6 log.go:172] (0xc0028b4820) (3) Data frame handling I0629 13:39:01.761396 6 log.go:172] (0xc0028b4820) (3) Data frame sent I0629 13:39:01.761411 6 log.go:172] (0xc002ebcd10) Data frame received for 3 I0629 13:39:01.761424 6 log.go:172] (0xc0028b4820) (3) Data frame handling I0629 13:39:01.762961 6 log.go:172] (0xc002ebcd10) Data frame received for 1 I0629 13:39:01.762981 6 log.go:172] (0xc00271c820) (1) Data frame handling I0629 13:39:01.762990 6 log.go:172] (0xc00271c820) (1) Data frame sent I0629 13:39:01.763145 6 log.go:172] (0xc002ebcd10) (0xc00271c820) Stream removed, broadcasting: 1 I0629 13:39:01.763259 6 log.go:172] (0xc002ebcd10) (0xc00271c820) Stream removed, broadcasting: 1 I0629 13:39:01.763276 6 log.go:172] (0xc002ebcd10) (0xc0028b4820) Stream removed, broadcasting: 3 I0629 
13:39:01.763345 6 log.go:172] (0xc002ebcd10) Go away received I0629 13:39:01.763477 6 log.go:172] (0xc002ebcd10) (0xc002104500) Stream removed, broadcasting: 5 Jun 29 13:39:01.763: INFO: Exec stderr: "" Jun 29 13:39:01.763: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.763: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.797430 6 log.go:172] (0xc002ebda20) (0xc00271cb40) Create stream I0629 13:39:01.797463 6 log.go:172] (0xc002ebda20) (0xc00271cb40) Stream added, broadcasting: 1 I0629 13:39:01.800307 6 log.go:172] (0xc002ebda20) Reply frame received for 1 I0629 13:39:01.800352 6 log.go:172] (0xc002ebda20) (0xc0021045a0) Create stream I0629 13:39:01.800372 6 log.go:172] (0xc002ebda20) (0xc0021045a0) Stream added, broadcasting: 3 I0629 13:39:01.801316 6 log.go:172] (0xc002ebda20) Reply frame received for 3 I0629 13:39:01.801370 6 log.go:172] (0xc002ebda20) (0xc002cfd4a0) Create stream I0629 13:39:01.801384 6 log.go:172] (0xc002ebda20) (0xc002cfd4a0) Stream added, broadcasting: 5 I0629 13:39:01.802258 6 log.go:172] (0xc002ebda20) Reply frame received for 5 I0629 13:39:01.853030 6 log.go:172] (0xc002ebda20) Data frame received for 3 I0629 13:39:01.853065 6 log.go:172] (0xc0021045a0) (3) Data frame handling I0629 13:39:01.853073 6 log.go:172] (0xc0021045a0) (3) Data frame sent I0629 13:39:01.853077 6 log.go:172] (0xc002ebda20) Data frame received for 3 I0629 13:39:01.853080 6 log.go:172] (0xc0021045a0) (3) Data frame handling I0629 13:39:01.853107 6 log.go:172] (0xc002ebda20) Data frame received for 5 I0629 13:39:01.853219 6 log.go:172] (0xc002cfd4a0) (5) Data frame handling I0629 13:39:01.854930 6 log.go:172] (0xc002ebda20) Data frame received for 1 I0629 13:39:01.854941 6 log.go:172] (0xc00271cb40) (1) Data frame handling I0629 13:39:01.854947 6 log.go:172] (0xc00271cb40) (1) Data frame sent I0629 13:39:01.855028 6 log.go:172] (0xc002ebda20) (0xc00271cb40) Stream removed, broadcasting: 1 I0629 13:39:01.855102 6 log.go:172] (0xc002ebda20) (0xc00271cb40) Stream removed, broadcasting: 1 I0629 13:39:01.855113 6 log.go:172] (0xc002ebda20) (0xc0021045a0) Stream removed, broadcasting: 3 I0629 13:39:01.855172 6 log.go:172] (0xc002ebda20) Go away received I0629 13:39:01.855215 6 log.go:172] (0xc002ebda20) (0xc002cfd4a0) Stream removed, broadcasting: 5 Jun 29 13:39:01.855: INFO: Exec stderr: "" Jun 29 13:39:01.855: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.855: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.889944 6 log.go:172] (0xc002f351e0) (0xc002cfd7c0) Create stream I0629 13:39:01.889982 6 log.go:172] (0xc002f351e0) (0xc002cfd7c0) Stream added, broadcasting: 1 I0629 13:39:01.897916 6 log.go:172] (0xc002f351e0) Reply frame received for 1 I0629 13:39:01.897977 6 log.go:172] (0xc002f351e0) (0xc0028b48c0) Create stream I0629 13:39:01.898021 6 log.go:172] (0xc002f351e0) (0xc0028b48c0) Stream added, broadcasting: 3 I0629 13:39:01.899501 6 log.go:172] (0xc002f351e0) Reply frame received for 3 I0629 13:39:01.899554 6 log.go:172] (0xc002f351e0) (0xc002df4000) Create stream I0629 13:39:01.899571 6 log.go:172] (0xc002f351e0) (0xc002df4000) Stream added, broadcasting: 5 I0629 13:39:01.900375 6 log.go:172] (0xc002f351e0) Reply 
frame received for 5 I0629 13:39:01.972059 6 log.go:172] (0xc002f351e0) Data frame received for 3 I0629 13:39:01.972115 6 log.go:172] (0xc0028b48c0) (3) Data frame handling I0629 13:39:01.972153 6 log.go:172] (0xc0028b48c0) (3) Data frame sent I0629 13:39:01.972169 6 log.go:172] (0xc002f351e0) Data frame received for 3 I0629 13:39:01.972182 6 log.go:172] (0xc0028b48c0) (3) Data frame handling I0629 13:39:01.972227 6 log.go:172] (0xc002f351e0) Data frame received for 5 I0629 13:39:01.972268 6 log.go:172] (0xc002df4000) (5) Data frame handling I0629 13:39:01.974176 6 log.go:172] (0xc002f351e0) Data frame received for 1 I0629 13:39:01.974197 6 log.go:172] (0xc002cfd7c0) (1) Data frame handling I0629 13:39:01.974222 6 log.go:172] (0xc002cfd7c0) (1) Data frame sent I0629 13:39:01.974247 6 log.go:172] (0xc002f351e0) (0xc002cfd7c0) Stream removed, broadcasting: 1 I0629 13:39:01.974314 6 log.go:172] (0xc002f351e0) Go away received I0629 13:39:01.974345 6 log.go:172] (0xc002f351e0) (0xc002cfd7c0) Stream removed, broadcasting: 1 I0629 13:39:01.974361 6 log.go:172] (0xc002f351e0) (0xc0028b48c0) Stream removed, broadcasting: 3 I0629 13:39:01.974373 6 log.go:172] (0xc002f351e0) (0xc002df4000) Stream removed, broadcasting: 5 Jun 29 13:39:01.974: INFO: Exec stderr: "" Jun 29 13:39:01.974: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4249 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:39:01.974: INFO: >>> kubeConfig: /root/.kube/config I0629 13:39:01.998650 6 log.go:172] (0xc002f34420) (0xc002258280) Create stream I0629 13:39:01.998682 6 log.go:172] (0xc002f34420) (0xc002258280) Stream added, broadcasting: 1 I0629 13:39:02.000795 6 log.go:172] (0xc002f34420) Reply frame received for 1 I0629 13:39:02.000848 6 log.go:172] (0xc002f34420) (0xc002df40a0) Create stream I0629 13:39:02.000861 6 log.go:172] (0xc002f34420) (0xc002df40a0) Stream added, broadcasting: 3 I0629 13:39:02.002039 6 log.go:172] (0xc002f34420) Reply frame received for 3 I0629 13:39:02.002086 6 log.go:172] (0xc002f34420) (0xc0003fcb40) Create stream I0629 13:39:02.002097 6 log.go:172] (0xc002f34420) (0xc0003fcb40) Stream added, broadcasting: 5 I0629 13:39:02.003062 6 log.go:172] (0xc002f34420) Reply frame received for 5 I0629 13:39:02.088047 6 log.go:172] (0xc002f34420) Data frame received for 5 I0629 13:39:02.088096 6 log.go:172] (0xc0003fcb40) (5) Data frame handling I0629 13:39:02.088123 6 log.go:172] (0xc002f34420) Data frame received for 3 I0629 13:39:02.088134 6 log.go:172] (0xc002df40a0) (3) Data frame handling I0629 13:39:02.088143 6 log.go:172] (0xc002df40a0) (3) Data frame sent I0629 13:39:02.088153 6 log.go:172] (0xc002f34420) Data frame received for 3 I0629 13:39:02.088158 6 log.go:172] (0xc002df40a0) (3) Data frame handling I0629 13:39:02.089698 6 log.go:172] (0xc002f34420) Data frame received for 1 I0629 13:39:02.089734 6 log.go:172] (0xc002258280) (1) Data frame handling I0629 13:39:02.089750 6 log.go:172] (0xc002258280) (1) Data frame sent I0629 13:39:02.089766 6 log.go:172] (0xc002f34420) (0xc002258280) Stream removed, broadcasting: 1 I0629 13:39:02.089838 6 log.go:172] (0xc002f34420) Go away received I0629 13:39:02.089988 6 log.go:172] (0xc002f34420) (0xc002258280) Stream removed, broadcasting: 1 I0629 13:39:02.090043 6 log.go:172] (0xc002f34420) (0xc002df40a0) Stream removed, broadcasting: 3 I0629 13:39:02.090071 6 log.go:172] (0xc002f34420) (0xc0003fcb40) Stream removed, broadcasting: 5 Jun 29 
13:39:02.090: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:39:02.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4249" for this suite. Jun 29 13:39:52.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:39:52.202: INFO: namespace e2e-kubelet-etc-hosts-4249 deletion completed in 50.10837721s • [SLOW TEST:61.340 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:39:52.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 29 13:39:58.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d8db39ef-1c80-453f-b74d-1db10b2eff83 -c busybox-main-container --namespace=emptydir-1031 -- cat /usr/share/volumeshare/shareddata.txt' Jun 29 13:39:58.516: INFO: stderr: "I0629 13:39:58.436700 1236 log.go:172] (0xc00012ae70) (0xc000606a00) Create stream\nI0629 13:39:58.436775 1236 log.go:172] (0xc00012ae70) (0xc000606a00) Stream added, broadcasting: 1\nI0629 13:39:58.440592 1236 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0629 13:39:58.440941 1236 log.go:172] (0xc00012ae70) (0xc000606000) Create stream\nI0629 13:39:58.440961 1236 log.go:172] (0xc00012ae70) (0xc000606000) Stream added, broadcasting: 3\nI0629 13:39:58.442157 1236 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0629 13:39:58.442220 1236 log.go:172] (0xc00012ae70) (0xc000650280) Create stream\nI0629 13:39:58.442244 1236 log.go:172] (0xc00012ae70) (0xc000650280) Stream added, broadcasting: 5\nI0629 13:39:58.443273 1236 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0629 13:39:58.507370 1236 log.go:172] (0xc00012ae70) Data frame received for 5\nI0629 13:39:58.507397 1236 log.go:172] (0xc000650280) (5) Data frame handling\nI0629 13:39:58.507413 1236 log.go:172] (0xc00012ae70) Data frame received for 3\nI0629 13:39:58.507432 1236 log.go:172] (0xc000606000) (3) Data frame handling\nI0629 13:39:58.507446 1236 log.go:172] (0xc000606000) (3) Data frame sent\nI0629 13:39:58.507453 1236 log.go:172] (0xc00012ae70) Data frame received for 3\nI0629 13:39:58.507458 1236 log.go:172] (0xc000606000) (3) Data frame handling\nI0629 13:39:58.508986 1236 log.go:172] (0xc00012ae70) Data frame received for 1\nI0629 
13:39:58.509017 1236 log.go:172] (0xc000606a00) (1) Data frame handling\nI0629 13:39:58.509032 1236 log.go:172] (0xc000606a00) (1) Data frame sent\nI0629 13:39:58.509053 1236 log.go:172] (0xc00012ae70) (0xc000606a00) Stream removed, broadcasting: 1\nI0629 13:39:58.509281 1236 log.go:172] (0xc00012ae70) Go away received\nI0629 13:39:58.509580 1236 log.go:172] (0xc00012ae70) (0xc000606a00) Stream removed, broadcasting: 1\nI0629 13:39:58.509601 1236 log.go:172] (0xc00012ae70) (0xc000606000) Stream removed, broadcasting: 3\nI0629 13:39:58.509613 1236 log.go:172] (0xc00012ae70) (0xc000650280) Stream removed, broadcasting: 5\n" Jun 29 13:39:58.517: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:39:58.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1031" for this suite. Jun 29 13:40:04.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:04.616: INFO: namespace emptydir-1031 deletion completed in 6.095878875s • [SLOW TEST:12.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:04.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-a1c5c79f-6a01-4965-b3eb-dde2ffc23d60 STEP: Creating a pod to test consume secrets Jun 29 13:40:04.754: INFO: Waiting up to 5m0s for pod "pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d" in namespace "secrets-7395" to be "success or failure" Jun 29 13:40:04.758: INFO: Pod "pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.68693ms Jun 29 13:40:06.762: INFO: Pod "pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007330223s Jun 29 13:40:08.767: INFO: Pod "pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012163471s STEP: Saw pod success Jun 29 13:40:08.767: INFO: Pod "pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d" satisfied condition "success or failure" Jun 29 13:40:08.771: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d container secret-volume-test: STEP: delete the pod Jun 29 13:40:08.796: INFO: Waiting for pod pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d to disappear Jun 29 13:40:08.800: INFO: Pod pod-secrets-bde103f8-719c-4b6f-ae3e-f3c20d63aa8d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:08.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7395" for this suite. Jun 29 13:40:14.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:14.909: INFO: namespace secrets-7395 deletion completed in 6.10606075s • [SLOW TEST:10.293 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:14.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1857.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1857.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1857.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1857.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1857.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1857.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 13:40:21.072: INFO: DNS probes using dns-1857/dns-test-21d288ca-4578-4f41-9c2d-c2a741c19054 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:21.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1857" for this suite. Jun 29 13:40:27.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:27.383: INFO: namespace dns-1857 deletion completed in 6.251212633s • [SLOW TEST:12.474 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:27.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 29 13:40:27.450: INFO: Waiting up to 5m0s for pod "pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801" in namespace "emptydir-8155" to be "success or failure" Jun 29 13:40:27.466: INFO: Pod "pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801": Phase="Pending", Reason="", readiness=false. Elapsed: 16.018907ms Jun 29 13:40:29.469: INFO: Pod "pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019423118s Jun 29 13:40:31.474: INFO: Pod "pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023527656s STEP: Saw pod success Jun 29 13:40:31.474: INFO: Pod "pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801" satisfied condition "success or failure" Jun 29 13:40:31.476: INFO: Trying to get logs from node iruya-worker pod pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801 container test-container: STEP: delete the pod Jun 29 13:40:31.497: INFO: Waiting for pod pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801 to disappear Jun 29 13:40:31.502: INFO: Pod pod-18c0e1ca-1a26-4a6b-a5a5-8c367415c801 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:31.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8155" for this suite. Jun 29 13:40:37.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:37.594: INFO: namespace emptydir-8155 deletion completed in 6.088266169s • [SLOW TEST:10.211 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:37.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:37.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9035" for this suite. 
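The [It] body of the Kubelet case above is empty in the log because the only assertion is that a pod whose container always fails can still be deleted. A minimal client-go sketch of that create-then-delete flow follows; it is illustrative, not the suite's code: the namespace, pod name, image and /bin/false command are assumptions, and the call signatures match the v1.15-era client-go in use here (newer releases add a context.Context and options arguments).

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "always-fails"}, // illustrative name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	// Deletion must succeed even though the container never ran successfully.
	grace := int64(0)
	if err := client.CoreV1().Pods("default").Delete("always-fails",
		&metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	fmt.Println("pod deleted")
}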
Jun 29 13:40:43.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:43.889: INFO: namespace kubelet-test-9035 deletion completed in 6.099169845s • [SLOW TEST:6.294 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:43.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 29 13:40:43.995: INFO: Waiting up to 5m0s for pod "pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd" in namespace "emptydir-7638" to be "success or failure" Jun 29 13:40:44.012: INFO: Pod "pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006147ms Jun 29 13:40:46.016: INFO: Pod "pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020884057s Jun 29 13:40:48.020: INFO: Pod "pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024683442s STEP: Saw pod success Jun 29 13:40:48.020: INFO: Pod "pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd" satisfied condition "success or failure" Jun 29 13:40:48.023: INFO: Trying to get logs from node iruya-worker2 pod pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd container test-container: STEP: delete the pod Jun 29 13:40:48.152: INFO: Waiting for pod pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd to disappear Jun 29 13:40:48.191: INFO: Pod pod-455490ac-9d81-4bcf-b5d8-c8486e6e7fcd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:48.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7638" for this suite. 
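The (non-root,0666,default) variant above exercises an emptyDir on the node's default storage medium, written by a non-root user with 0666 permissions. The pod builder below is a sketch of that shape, not the suite's exact mounttest invocation; the UID, paths and shell command are assumptions.

package emptydirsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootEmptyDirPod() *v1.Pod {
	uid := int64(1001) // any non-root UID; illustrative
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []v1.Volume{{
				Name: "scratch",
				// Leaving Medium empty selects the node's default medium,
				// as opposed to v1.StorageMediumMemory (the tmpfs variants).
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []v1.Container{{
				Name:  "writer",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
}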
Jun 29 13:40:54.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:40:54.352: INFO: namespace emptydir-7638 deletion completed in 6.12261097s • [SLOW TEST:10.463 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:40:54.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jun 29 13:40:54.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 29 13:40:54.700: INFO: stderr: "" Jun 29 13:40:54.700: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:40:54.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9988" for this suite. 
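Programmatically, the api-versions check above is a discovery call. A sketch of the equivalent client-go lookup follows, under the same hedges as before (v1.15-era signatures, kubeconfig path taken from the log).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// ServerGroups merges the legacy core API (/api) with the named
	// groups (/apis); the core group is the one whose GroupVersion is "v1".
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, ver := range g.Versions {
			if ver.GroupVersion == "v1" {
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
}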
Jun 29 13:41:00.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:41:00.860: INFO: namespace kubectl-9988 deletion completed in 6.154696938s • [SLOW TEST:6.507 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:41:00.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:41:31.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8528" for this suite. 
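The terminate-cmd-rpa/rpof/rpn suffixes above appear to encode the three restart policies (Always, OnFailure, Never); for each, the test asserts the expected RestartCount, Phase, Ready condition and State. A sketch of the RestartCount polling loop follows; the helper name, retry budget and interval are assumptions, and Get takes no context in this client-go vintage.

package restartsketch

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForRestartCount polls a pod until its first container reports at
// least `want` restarts, mirroring the RestartCount checks above.
func waitForRestartCount(client kubernetes.Interface, ns, name string, want int32) error {
	for i := 0; i < 60; i++ {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if len(pod.Status.ContainerStatuses) > 0 &&
			pod.Status.ContainerStatuses[0].RestartCount >= want {
			fmt.Printf("phase=%s restarts=%d\n", pod.Status.Phase,
				pod.Status.ContainerStatuses[0].RestartCount)
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never reached %d restarts", ns, name, want)
}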
Jun 29 13:41:37.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:41:37.736: INFO: namespace container-runtime-8528 deletion completed in 6.106928164s • [SLOW TEST:36.875 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:41:37.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-314b67f7-f781-423c-b7e6-03c654706ae2 in namespace container-probe-8468 Jun 29 13:41:41.837: INFO: Started pod busybox-314b67f7-f781-423c-b7e6-03c654706ae2 in namespace container-probe-8468 STEP: checking the pod's current state and verifying that restartCount is present Jun 29 13:41:41.841: INFO: Initial restart count of pod busybox-314b67f7-f781-423c-b7e6-03c654706ae2 is 0 Jun 29 13:42:29.959: INFO: Restart count of pod container-probe-8468/busybox-314b67f7-f781-423c-b7e6-03c654706ae2 is now 1 (48.117722506s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:42:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8468" for this suite. 
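The restart observed above (count 0 -> 1 after ~48s) is the kubelet reacting to a failing exec liveness probe. The pod shape below is a sketch of that pattern: the container creates /tmp/health, later removes it, and the `cat /tmp/health` probe starts failing. The exact sleep durations and thresholds are assumptions; note the probe field is named Handler in this v1.15-era API (renamed ProbeHandler in much newer releases).

package probesketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for 30s, then the probed file disappears.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}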
Jun 29 13:42:36.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:42:36.092: INFO: namespace container-probe-8468 deletion completed in 6.107052624s • [SLOW TEST:58.356 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:42:36.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-98 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 29 13:42:36.150: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 29 13:43:00.365: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.53:8080/dial?request=hostName&protocol=http&host=10.244.1.52&port=8080&tries=1'] Namespace:pod-network-test-98 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:43:00.365: INFO: >>> kubeConfig: /root/.kube/config I0629 13:43:00.400624 6 log.go:172] (0xc001650790) (0xc0017a0320) Create stream I0629 13:43:00.400659 6 log.go:172] (0xc001650790) (0xc0017a0320) Stream added, broadcasting: 1 I0629 13:43:00.402455 6 log.go:172] (0xc001650790) Reply frame received for 1 I0629 13:43:00.402519 6 log.go:172] (0xc001650790) (0xc0016320a0) Create stream I0629 13:43:00.402546 6 log.go:172] (0xc001650790) (0xc0016320a0) Stream added, broadcasting: 3 I0629 13:43:00.403565 6 log.go:172] (0xc001650790) Reply frame received for 3 I0629 13:43:00.403625 6 log.go:172] (0xc001650790) (0xc00166cdc0) Create stream I0629 13:43:00.403641 6 log.go:172] (0xc001650790) (0xc00166cdc0) Stream added, broadcasting: 5 I0629 13:43:00.404417 6 log.go:172] (0xc001650790) Reply frame received for 5 I0629 13:43:00.512068 6 log.go:172] (0xc001650790) Data frame received for 3 I0629 13:43:00.512103 6 log.go:172] (0xc0016320a0) (3) Data frame handling I0629 13:43:00.512123 6 log.go:172] (0xc0016320a0) (3) Data frame sent I0629 13:43:00.512726 6 log.go:172] (0xc001650790) Data frame received for 3 I0629 13:43:00.512764 6 log.go:172] (0xc0016320a0) (3) Data frame handling I0629 13:43:00.513037 6 log.go:172] (0xc001650790) Data frame received for 5 I0629 13:43:00.513066 6 log.go:172] (0xc00166cdc0) (5) Data frame handling I0629 13:43:00.515272 6 log.go:172] (0xc001650790) Data frame received for 1 I0629 13:43:00.515307 6 log.go:172] (0xc0017a0320) (1) Data frame handling 
I0629 13:43:00.515333 6 log.go:172] (0xc0017a0320) (1) Data frame sent I0629 13:43:00.515361 6 log.go:172] (0xc001650790) (0xc0017a0320) Stream removed, broadcasting: 1 I0629 13:43:00.515410 6 log.go:172] (0xc001650790) Go away received I0629 13:43:00.515461 6 log.go:172] (0xc001650790) (0xc0017a0320) Stream removed, broadcasting: 1 I0629 13:43:00.515491 6 log.go:172] (0xc001650790) (0xc0016320a0) Stream removed, broadcasting: 3 I0629 13:43:00.515498 6 log.go:172] (0xc001650790) (0xc00166cdc0) Stream removed, broadcasting: 5 Jun 29 13:43:00.515: INFO: Waiting for endpoints: map[] Jun 29 13:43:00.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.53:8080/dial?request=hostName&protocol=http&host=10.244.2.239&port=8080&tries=1'] Namespace:pod-network-test-98 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:43:00.519: INFO: >>> kubeConfig: /root/.kube/config I0629 13:43:00.591576 6 log.go:172] (0xc001651130) (0xc0017a0a00) Create stream I0629 13:43:00.591618 6 log.go:172] (0xc001651130) (0xc0017a0a00) Stream added, broadcasting: 1 I0629 13:43:00.593801 6 log.go:172] (0xc001651130) Reply frame received for 1 I0629 13:43:00.593828 6 log.go:172] (0xc001651130) (0xc0017a0aa0) Create stream I0629 13:43:00.593838 6 log.go:172] (0xc001651130) (0xc0017a0aa0) Stream added, broadcasting: 3 I0629 13:43:00.594694 6 log.go:172] (0xc001651130) Reply frame received for 3 I0629 13:43:00.594735 6 log.go:172] (0xc001651130) (0xc00166ce60) Create stream I0629 13:43:00.594751 6 log.go:172] (0xc001651130) (0xc00166ce60) Stream added, broadcasting: 5 I0629 13:43:00.595583 6 log.go:172] (0xc001651130) Reply frame received for 5 I0629 13:43:00.676898 6 log.go:172] (0xc001651130) Data frame received for 3 I0629 13:43:00.676926 6 log.go:172] (0xc0017a0aa0) (3) Data frame handling I0629 13:43:00.676949 6 log.go:172] (0xc0017a0aa0) (3) Data frame sent I0629 13:43:00.677798 6 log.go:172] (0xc001651130) Data frame received for 5 I0629 13:43:00.677824 6 log.go:172] (0xc00166ce60) (5) Data frame handling I0629 13:43:00.677846 6 log.go:172] (0xc001651130) Data frame received for 3 I0629 13:43:00.677869 6 log.go:172] (0xc0017a0aa0) (3) Data frame handling I0629 13:43:00.679498 6 log.go:172] (0xc001651130) Data frame received for 1 I0629 13:43:00.679521 6 log.go:172] (0xc0017a0a00) (1) Data frame handling I0629 13:43:00.679533 6 log.go:172] (0xc0017a0a00) (1) Data frame sent I0629 13:43:00.679625 6 log.go:172] (0xc001651130) (0xc0017a0a00) Stream removed, broadcasting: 1 I0629 13:43:00.679662 6 log.go:172] (0xc001651130) Go away received I0629 13:43:00.679751 6 log.go:172] (0xc001651130) (0xc0017a0a00) Stream removed, broadcasting: 1 I0629 13:43:00.679775 6 log.go:172] (0xc001651130) (0xc0017a0aa0) Stream removed, broadcasting: 3 I0629 13:43:00.679788 6 log.go:172] (0xc001651130) (0xc00166ce60) Stream removed, broadcasting: 5 Jun 29 13:43:00.679: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:43:00.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-98" for this suite. 
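The ExecWithOptions entries and the surrounding Create stream / Data frame / Stream removed chatter are the client side of a SPDY exec session: separate multiplexed streams carry the error channel, stdout and stderr. A sketch of issuing the same in-pod curl by hand follows; the command, pod name, container and IPs are taken from the log above, the rest is illustrative, and exec.Stream matches the v1.15-era remotecommand API (newer releases add StreamWithContext).

package main

import (
	"bytes"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build the POST .../pods/{name}/exec request that the framework's
	// ExecWithOptions helper issues under the hood.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-98").
		Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: "hostexec",
			Command: []string{"/bin/sh", "-c",
				"curl -g -q -s 'http://10.244.1.53:8080/dial?request=hostName&protocol=http&host=10.244.1.52&port=8080&tries=1'"},
			Stdout: true,
			Stderr: true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream drives the SPDY connection; the frame-level lines in the log
	// above are this transport's debug output.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println("stdout:", stdout.String())
}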
Jun 29 13:43:24.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:43:24.810: INFO: namespace pod-network-test-98 deletion completed in 24.125937093s • [SLOW TEST:48.717 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:43:24.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-8aaff46a-5b43-4937-936e-fb71cc4b9114 STEP: Creating a pod to test consume configMaps Jun 29 13:43:24.925: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6" in namespace "configmap-4977" to be "success or failure" Jun 29 13:43:24.941: INFO: Pod "pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.596926ms Jun 29 13:43:26.946: INFO: Pod "pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020387989s Jun 29 13:43:28.951: INFO: Pod "pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025572169s STEP: Saw pod success Jun 29 13:43:28.951: INFO: Pod "pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6" satisfied condition "success or failure" Jun 29 13:43:28.955: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6 container configmap-volume-test: STEP: delete the pod Jun 29 13:43:28.992: INFO: Waiting for pod pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6 to disappear Jun 29 13:43:29.002: INFO: Pod pod-configmaps-fd500dc0-d078-43f8-8f65-aa0fcb5a8ad6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:43:29.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4977" for this suite. 
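"Mappings and Item mode set" refers to the Items list of a configMap volume source: a key is remapped to a different file path and given an explicit per-item mode. The volume builder below sketches that shape; the configMap name, key, path and mode are assumptions.

package configmapsketch

import (
	v1 "k8s.io/api/core/v1"
)

func mappedConfigMapVolume() v1.Volume {
	mode := int32(0400)
	return v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{
					Name: "configmap-test-volume-map", // illustrative name
				},
				Items: []v1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-2",
					Mode: &mode, // per-item mode overrides the volume's defaultMode
				}},
			},
		},
	}
}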
Jun 29 13:43:35.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:43:35.104: INFO: namespace configmap-4977 deletion completed in 6.098676107s • [SLOW TEST:10.293 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:43:35.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-95f1a7f3-e1c3-4f79-b9cc-56973a926bff STEP: Creating a pod to test consume configMaps Jun 29 13:43:35.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a" in namespace "configmap-2697" to be "success or failure" Jun 29 13:43:35.176: INFO: Pod "pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.758156ms Jun 29 13:43:37.238: INFO: Pod "pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065431501s Jun 29 13:43:39.240: INFO: Pod "pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068065983s STEP: Saw pod success Jun 29 13:43:39.240: INFO: Pod "pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a" satisfied condition "success or failure" Jun 29 13:43:39.242: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a container configmap-volume-test: STEP: delete the pod Jun 29 13:43:39.273: INFO: Waiting for pod pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a to disappear Jun 29 13:43:39.290: INFO: Pod pod-configmaps-d6db3bda-c00a-4a1e-b875-71ab6794cc9a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:43:39.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2697" for this suite. 
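The "as non-root" variant reads the same mapped file, but with the consuming container running under a non-root UID. Building on the volume helper in the previous sketch, a consumer pod might look like the following; the UID, mount path and shell command are assumptions.

package configmapsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootConsumerPod(volume v1.Volume) *v1.Pod {
	uid := int64(1000) // illustrative non-root UID
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes:       []v1.Volume{volume},
			Containers: []v1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm/path/to/data-2"},
				// Container-level RunAsUser makes the read happen as a
				// non-root UID, which is what the [LinuxOnly] variant checks.
				SecurityContext: &v1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []v1.VolumeMount{{Name: volume.Name, MountPath: "/etc/cm"}},
			}},
		},
	}
}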
Jun 29 13:43:45.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:43:45.387: INFO: namespace configmap-2697 deletion completed in 6.094661273s • [SLOW TEST:10.282 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:43:45.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3bf49418-2a04-4404-ad69-d9c6b2e38ae9 STEP: Creating a pod to test consume configMaps Jun 29 13:43:45.490: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc" in namespace "projected-9728" to be "success or failure" Jun 29 13:43:45.493: INFO: Pod "pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.980247ms Jun 29 13:43:47.496: INFO: Pod "pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006106173s Jun 29 13:43:49.500: INFO: Pod "pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010074188s STEP: Saw pod success Jun 29 13:43:49.500: INFO: Pod "pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc" satisfied condition "success or failure" Jun 29 13:43:49.502: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc container projected-configmap-volume-test: STEP: delete the pod Jun 29 13:43:49.587: INFO: Waiting for pod pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc to disappear Jun 29 13:43:49.753: INFO: Pod pod-projected-configmaps-ea2f44f6-9499-4ed2-ad58-f9d4c3aa58bc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:43:49.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9728" for this suite. 
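defaultMode, exercised next above, sets one file mode on everything projected into the volume, as opposed to the per-item mode in the earlier specs. A minimal sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: proj-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/proj/"]
    volumeMounts:
    - name: proj-vol
      mountPath: /etc/proj
  volumes:
  - name: proj-vol
    projected:
      defaultMode: 0440   # applied to every file from every projected source
      sources:
      - configMap:
          name: example-cm
EOF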
Jun 29 13:43:55.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:43:55.852: INFO: namespace projected-9728 deletion completed in 6.096076807s • [SLOW TEST:10.465 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:43:55.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:43:55.946: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:43:59.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8829" for this suite. 
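The websocket spec above reads from the same log subresource that kubectl logs uses; the difference is only the transport. Roughly (server address, credential paths, and pod name are placeholders):

# Plain GET against the log subresource:
curl --cacert ca.crt --cert client.crt --key client.key \
  "https://APISERVER:6443/api/v1/namespaces/default/pods/mypod/log"

# The test opens the same URL over a WebSocket upgrade (wss://...) and reads
# the container output from the stream rather than from a response body.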
Jun 29 13:44:40.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:44:40.122: INFO: namespace pods-8829 deletion completed in 40.120697411s • [SLOW TEST:44.270 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:44:40.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jun 29 13:44:40.720: INFO: created pod pod-service-account-defaultsa Jun 29 13:44:40.720: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 29 13:44:40.750: INFO: created pod pod-service-account-mountsa Jun 29 13:44:40.750: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 29 13:44:40.754: INFO: created pod pod-service-account-nomountsa Jun 29 13:44:40.754: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 29 13:44:40.783: INFO: created pod pod-service-account-defaultsa-mountspec Jun 29 13:44:40.783: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 29 13:44:40.808: INFO: created pod pod-service-account-mountsa-mountspec Jun 29 13:44:40.808: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 29 13:44:40.820: INFO: created pod pod-service-account-nomountsa-mountspec Jun 29 13:44:40.820: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 29 13:44:40.898: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 29 13:44:40.898: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 29 13:44:40.938: INFO: created pod pod-service-account-mountsa-nomountspec Jun 29 13:44:40.938: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 29 13:44:41.031: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 29 13:44:41.031: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:44:41.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5840" for this suite. 
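The pod matrix above covers automountServiceAccountToken set on the ServiceAccount, on the pod spec, or both; when both are set, the pod-level field takes precedence, which is consistent with the mount results logged for the *-mountspec and *-nomountspec pods. A sketch of the opt-out case (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod field wins if it conflicts with the SA's
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount/ || true"]
EOF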
Jun 29 13:45:11.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:45:11.199: INFO: namespace svcaccounts-5840 deletion completed in 30.164603457s • [SLOW TEST:31.077 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:45:11.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 13:45:11.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7694' Jun 29 13:45:11.361: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 29 13:45:11.361: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jun 29 13:45:11.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7694' Jun 29 13:45:11.515: INFO: stderr: "" Jun 29 13:45:11.515: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:45:11.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7694" for this suite. 
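The stderr captured above is the expected deprecation warning for this kubectl version; the generator form still creates the Job. For comparison, the command the test runs and the replacement the warning suggests:

# Exercised by the test (deprecated generator):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine

# Replacement suggested by the warning:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine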
Jun 29 13:45:17.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:45:17.611: INFO: namespace kubectl-7694 deletion completed in 6.092173863s • [SLOW TEST:6.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:45:17.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-fsk8 STEP: Creating a pod to test atomic-volume-subpath Jun 29 13:45:17.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fsk8" in namespace "subpath-6707" to be "success or failure" Jun 29 13:45:17.706: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611774ms Jun 29 13:45:19.711: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007108294s Jun 29 13:45:21.715: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 4.011285818s Jun 29 13:45:23.719: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 6.014884039s Jun 29 13:45:25.722: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 8.01835348s Jun 29 13:45:27.726: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 10.02240498s Jun 29 13:45:29.730: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 12.026526327s Jun 29 13:45:31.735: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 14.031039305s Jun 29 13:45:33.739: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 16.034793248s Jun 29 13:45:35.743: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 18.039077014s Jun 29 13:45:37.746: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. Elapsed: 20.042418029s Jun 29 13:45:39.751: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.047227318s Jun 29 13:45:41.755: INFO: Pod "pod-subpath-test-projected-fsk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.051568986s STEP: Saw pod success Jun 29 13:45:41.755: INFO: Pod "pod-subpath-test-projected-fsk8" satisfied condition "success or failure" Jun 29 13:45:41.759: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-fsk8 container test-container-subpath-projected-fsk8: STEP: delete the pod Jun 29 13:45:41.785: INFO: Waiting for pod pod-subpath-test-projected-fsk8 to disappear Jun 29 13:45:41.789: INFO: Pod pod-subpath-test-projected-fsk8 no longer exists STEP: Deleting pod pod-subpath-test-projected-fsk8 Jun 29 13:45:41.789: INFO: Deleting pod "pod-subpath-test-projected-fsk8" in namespace "subpath-6707" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:45:41.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6707" for this suite. Jun 29 13:45:47.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:45:47.898: INFO: namespace subpath-6707 deletion completed in 6.102997842s • [SLOW TEST:30.287 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:45:47.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 13:45:54.057: INFO: DNS probes using dns-3631/dns-test-54e77230-6b6e-4774-b4b6-12085f20dd60 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:45:54.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3631" for this suite. Jun 29 13:46:00.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:46:00.219: INFO: namespace dns-3631 deletion completed in 6.109116981s • [SLOW TEST:12.321 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:46:00.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 29 13:46:04.876: INFO: Successfully updated pod "labelsupdate310a6877-22ba-4522-88b2-d8b772a46856" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:46:06.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6680" for this suite. 
Jun 29 13:46:30.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:46:30.986: INFO: namespace downward-api-6680 deletion completed in 24.078823176s • [SLOW TEST:30.767 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:46:30.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:46:33.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec" in namespace "projected-4158" to be "success or failure" Jun 29 13:46:33.477: INFO: Pod "downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec": Phase="Pending", Reason="", readiness=false. Elapsed: 110.829346ms Jun 29 13:46:35.558: INFO: Pod "downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191689127s Jun 29 13:46:37.562: INFO: Pod "downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195906619s STEP: Saw pod success Jun 29 13:46:37.562: INFO: Pod "downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec" satisfied condition "success or failure" Jun 29 13:46:37.566: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec container client-container: STEP: delete the pod Jun 29 13:46:37.588: INFO: Waiting for pod downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec to disappear Jun 29 13:46:37.629: INFO: Pod downwardapi-volume-97f56770-1f2c-4f76-9659-5fa898e298ec no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:46:37.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4158" for this suite. 
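The memory-limit file content comes from a resourceFieldRef in the projected downwardAPI source. A minimal sketch (pod name and limit value are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF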
Jun 29 13:46:43.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:46:43.759: INFO: namespace projected-4158 deletion completed in 6.126816073s • [SLOW TEST:12.773 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:46:43.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 29 13:46:43.811: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 29 13:46:43.843: INFO: Waiting for terminating namespaces to be deleted... Jun 29 13:46:43.846: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 29 13:46:43.850: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.850: INFO: Container kindnet-cni ready: true, restart count 4 Jun 29 13:46:43.850: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.850: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 13:46:43.850: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 29 13:46:43.855: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.855: INFO: Container kindnet-cni ready: true, restart count 4 Jun 29 13:46:43.855: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.855: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 13:46:43.855: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.855: INFO: Container coredns ready: true, restart count 0 Jun 29 13:46:43.855: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 29 13:46:43.855: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161d07c9320f2b0d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
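The FailedScheduling event above is the pass condition: the pod asks for a node label that no node carries, so the scheduler must leave it Pending. A minimal reproduction (label key/value illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example.com/nonexistent: "42"   # present on no node
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
EOF

kubectl describe pod restricted-pod-demo   # Events show FailedScheduling: node(s) didn't match node selector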
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:46:44.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7134" for this suite. Jun 29 13:46:50.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:46:50.974: INFO: namespace sched-pred-7134 deletion completed in 6.097562697s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.215 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:46:50.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jun 29 13:46:51.052: INFO: Waiting up to 5m0s for pod "client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7" in namespace "containers-5954" to be "success or failure" Jun 29 13:46:51.073: INFO: Pod "client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.411461ms Jun 29 13:46:53.077: INFO: Pod "client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024257535s Jun 29 13:46:55.081: INFO: Pod "client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028732791s STEP: Saw pod success Jun 29 13:46:55.081: INFO: Pod "client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7" satisfied condition "success or failure" Jun 29 13:46:55.084: INFO: Trying to get logs from node iruya-worker pod client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7 container test-container: STEP: delete the pod Jun 29 13:46:55.106: INFO: Waiting for pod client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7 to disappear Jun 29 13:46:55.145: INFO: Pod client-containers-91117213-d4fe-436a-aa5a-55a3a282d2c7 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:46:55.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5954" for this suite. 
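In a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD; this spec verifies the former. A sketch (image tag and strings illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]              # overrides the image ENTRYPOINT
    args: ["entrypoint", "overridden"]  # overrides the image CMD
EOF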
Jun 29 13:47:01.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:47:01.280: INFO: namespace containers-5954 deletion completed in 6.131527454s • [SLOW TEST:10.305 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:47:01.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 29 13:47:01.394: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Jun 29 13:47:02.298: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 29 13:47:04.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 13:47:06.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729035222, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 13:47:09.326: INFO: Waited 699.873055ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:47:09.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5221" for this suite. Jun 29 13:47:15.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:47:15.953: INFO: namespace aggregator-5221 deletion completed in 6.152451783s • [SLOW TEST:14.673 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:47:15.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 29 13:47:16.762: INFO: Pod name wrapped-volume-race-423b5749-5c44-4b73-bb5f-ae5ed5f2a468: Found 0 pods out of 5 Jun 29 13:47:21.770: INFO: Pod name wrapped-volume-race-423b5749-5c44-4b73-bb5f-ae5ed5f2a468: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-423b5749-5c44-4b73-bb5f-ae5ed5f2a468 in namespace emptydir-wrapper-7197, will wait for the garbage collector to delete the pods Jun 29 13:47:35.863: INFO: Deleting ReplicationController wrapped-volume-race-423b5749-5c44-4b73-bb5f-ae5ed5f2a468 took: 17.834487ms Jun 29 13:47:36.163: INFO: Terminating ReplicationController wrapped-volume-race-423b5749-5c44-4b73-bb5f-ae5ed5f2a468 pods took: 300.303706ms STEP: Creating RC which spawns configmap-volume pods Jun 29 13:48:22.694: INFO: Pod name wrapped-volume-race-d1a06159-cc58-49a1-99c5-be6d3afabad8: Found 0 pods out of 5 Jun 29 13:48:27.702: INFO: Pod name wrapped-volume-race-d1a06159-cc58-49a1-99c5-be6d3afabad8: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d1a06159-cc58-49a1-99c5-be6d3afabad8 in namespace emptydir-wrapper-7197, will wait for the garbage collector to delete the pods Jun 29 13:48:41.783: INFO: Deleting ReplicationController 
wrapped-volume-race-d1a06159-cc58-49a1-99c5-be6d3afabad8 took: 7.462881ms Jun 29 13:48:42.083: INFO: Terminating ReplicationController wrapped-volume-race-d1a06159-cc58-49a1-99c5-be6d3afabad8 pods took: 300.274668ms STEP: Creating RC which spawns configmap-volume pods Jun 29 13:49:23.367: INFO: Pod name wrapped-volume-race-1a4d845f-2337-43f7-8c01-f4c78682cb3a: Found 0 pods out of 5 Jun 29 13:49:28.384: INFO: Pod name wrapped-volume-race-1a4d845f-2337-43f7-8c01-f4c78682cb3a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1a4d845f-2337-43f7-8c01-f4c78682cb3a in namespace emptydir-wrapper-7197, will wait for the garbage collector to delete the pods Jun 29 13:49:42.474: INFO: Deleting ReplicationController wrapped-volume-race-1a4d845f-2337-43f7-8c01-f4c78682cb3a took: 12.654983ms Jun 29 13:49:42.774: INFO: Terminating ReplicationController wrapped-volume-race-1a4d845f-2337-43f7-8c01-f4c78682cb3a pods took: 300.361587ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:50:23.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7197" for this suite. Jun 29 13:50:31.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:50:31.941: INFO: namespace emptydir-wrapper-7197 deletion completed in 8.135680564s • [SLOW TEST:195.987 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:50:31.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 13:50:32.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4" in namespace "projected-3275" to be "success or failure" Jun 29 13:50:32.052: INFO: Pod "downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.332026ms Jun 29 13:50:34.056: INFO: Pod "downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024085275s Jun 29 13:50:36.060: INFO: Pod "downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028698277s STEP: Saw pod success Jun 29 13:50:36.060: INFO: Pod "downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4" satisfied condition "success or failure" Jun 29 13:50:36.064: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4 container client-container: STEP: delete the pod Jun 29 13:50:36.101: INFO: Waiting for pod downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4 to disappear Jun 29 13:50:36.104: INFO: Pod downwardapi-volume-81040741-8fd9-4a08-9104-f1e49b51ded4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:50:36.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3275" for this suite. Jun 29 13:50:42.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:50:42.208: INFO: namespace projected-3275 deletion completed in 6.100113357s • [SLOW TEST:10.266 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:50:42.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-06066ff5-6116-4cea-b611-2236667558ec STEP: Creating a pod to test consume configMaps Jun 29 13:50:42.311: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3" in namespace "projected-9940" to be "success or failure" Jun 29 13:50:42.314: INFO: Pod "pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425272ms Jun 29 13:50:44.319: INFO: Pod "pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007982571s Jun 29 13:50:46.323: INFO: Pod "pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012818993s STEP: Saw pod success Jun 29 13:50:46.324: INFO: Pod "pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3" satisfied condition "success or failure" Jun 29 13:50:46.326: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3 container projected-configmap-volume-test: STEP: delete the pod Jun 29 13:50:46.345: INFO: Waiting for pod pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3 to disappear Jun 29 13:50:46.350: INFO: Pod pod-projected-configmaps-9bfa6a49-ac44-47b5-b2d5-bb5e8975c4b3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:50:46.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9940" for this suite. Jun 29 13:50:52.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:50:52.472: INFO: namespace projected-9940 deletion completed in 6.119028111s • [SLOW TEST:10.263 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:50:52.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 29 13:50:57.593: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:50:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4778" for this suite. 
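Adoption and release in the spec above are driven purely by selector matching: a bare pod whose labels match the ReplicaSet selector gains an ownerReference to it, and changing the label out from under the selector releases the pod, after which the controller creates a replacement to restore the replica count. Roughly:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: adoption-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# A pre-existing bare pod labeled name=pod-adoption-release is adopted (it gains
# an ownerReference to the ReplicaSet). Relabeling it releases it again:
kubectl label pod pod-adoption-release name=released --overwrite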
Jun 29 13:51:20.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:51:20.716: INFO: namespace replicaset-4778 deletion completed in 22.101839478s • [SLOW TEST:28.244 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:51:20.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 29 13:51:28.842: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:28.897: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:30.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:30.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:32.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:32.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:34.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:34.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:36.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:36.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:38.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:38.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:40.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:40.902: INFO: Pod pod-with-prestop-http-hook still exists Jun 29 13:51:42.898: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 29 13:51:42.901: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:51:42.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3836" for this suite. 
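The slow disappearance loop above is the prestop hook draining: deleting the pod triggers an HTTP GET against the handler before the container is killed. A sketch of the relevant fragment (path, port, and target IP are illustrative; the test aims the hook at the separate handler pod created in BeforeEach):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: test
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # handler just records that it was called
          port: 8080
          host: 10.244.2.99         # illustrative handler-pod IP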
Jun 29 13:52:04.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:52:04.996: INFO: namespace container-lifecycle-hook-3836 deletion completed in 22.081532614s • [SLOW TEST:44.280 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:52:04.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7794 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 29 13:52:05.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 29 13:52:27.221: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:52:27.221: INFO: >>> kubeConfig: /root/.kube/config I0629 13:52:27.251828 6 log.go:172] (0xc001c16420) (0xc0011281e0) Create stream I0629 13:52:27.251862 6 log.go:172] (0xc001c16420) (0xc0011281e0) Stream added, broadcasting: 1 I0629 13:52:27.254379 6 log.go:172] (0xc001c16420) Reply frame received for 1 I0629 13:52:27.254444 6 log.go:172] (0xc001c16420) (0xc001128780) Create stream I0629 13:52:27.254456 6 log.go:172] (0xc001c16420) (0xc001128780) Stream added, broadcasting: 3 I0629 13:52:27.255367 6 log.go:172] (0xc001c16420) Reply frame received for 3 I0629 13:52:27.255410 6 log.go:172] (0xc001c16420) (0xc002258b40) Create stream I0629 13:52:27.255426 6 log.go:172] (0xc001c16420) (0xc002258b40) Stream added, broadcasting: 5 I0629 13:52:27.256479 6 log.go:172] (0xc001c16420) Reply frame received for 5 I0629 13:52:27.350015 6 log.go:172] (0xc001c16420) Data frame received for 3 I0629 13:52:27.350047 6 log.go:172] (0xc001128780) (3) Data frame handling I0629 13:52:27.350061 6 log.go:172] (0xc001128780) (3) Data frame sent I0629 13:52:27.350108 6 log.go:172] (0xc001c16420) Data frame received for 3 I0629 13:52:27.350121 6 log.go:172] (0xc001128780) (3) Data frame handling I0629 13:52:27.350167 6 log.go:172] (0xc001c16420) Data frame received for 5 I0629 13:52:27.350187 6 log.go:172] (0xc002258b40) (5) Data frame handling I0629 13:52:27.351759 6 log.go:172] 
(0xc001c16420) Data frame received for 1 I0629 13:52:27.351788 6 log.go:172] (0xc0011281e0) (1) Data frame handling I0629 13:52:27.351833 6 log.go:172] (0xc0011281e0) (1) Data frame sent I0629 13:52:27.351855 6 log.go:172] (0xc001c16420) (0xc0011281e0) Stream removed, broadcasting: 1 I0629 13:52:27.351874 6 log.go:172] (0xc001c16420) Go away received I0629 13:52:27.352027 6 log.go:172] (0xc001c16420) (0xc0011281e0) Stream removed, broadcasting: 1 I0629 13:52:27.352048 6 log.go:172] (0xc001c16420) (0xc001128780) Stream removed, broadcasting: 3 I0629 13:52:27.352060 6 log.go:172] (0xc001c16420) (0xc002258b40) Stream removed, broadcasting: 5 Jun 29 13:52:27.352: INFO: Found all expected endpoints: [netserver-0] Jun 29 13:52:27.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.67:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 13:52:27.355: INFO: >>> kubeConfig: /root/.kube/config I0629 13:52:27.385526 6 log.go:172] (0xc001c171e0) (0xc001128dc0) Create stream I0629 13:52:27.385574 6 log.go:172] (0xc001c171e0) (0xc001128dc0) Stream added, broadcasting: 1 I0629 13:52:27.387257 6 log.go:172] (0xc001c171e0) Reply frame received for 1 I0629 13:52:27.387290 6 log.go:172] (0xc001c171e0) (0xc002258d20) Create stream I0629 13:52:27.387302 6 log.go:172] (0xc001c171e0) (0xc002258d20) Stream added, broadcasting: 3 I0629 13:52:27.388078 6 log.go:172] (0xc001c171e0) Reply frame received for 3 I0629 13:52:27.388114 6 log.go:172] (0xc001c171e0) (0xc002258e60) Create stream I0629 13:52:27.388132 6 log.go:172] (0xc001c171e0) (0xc002258e60) Stream added, broadcasting: 5 I0629 13:52:27.389007 6 log.go:172] (0xc001c171e0) Reply frame received for 5 I0629 13:52:27.579700 6 log.go:172] (0xc001c171e0) Data frame received for 5 I0629 13:52:27.579735 6 log.go:172] (0xc002258e60) (5) Data frame handling I0629 13:52:27.579756 6 log.go:172] (0xc001c171e0) Data frame received for 3 I0629 13:52:27.579771 6 log.go:172] (0xc002258d20) (3) Data frame handling I0629 13:52:27.579784 6 log.go:172] (0xc002258d20) (3) Data frame sent I0629 13:52:27.579794 6 log.go:172] (0xc001c171e0) Data frame received for 3 I0629 13:52:27.579803 6 log.go:172] (0xc002258d20) (3) Data frame handling I0629 13:52:27.580799 6 log.go:172] (0xc001c171e0) Data frame received for 1 I0629 13:52:27.580817 6 log.go:172] (0xc001128dc0) (1) Data frame handling I0629 13:52:27.580832 6 log.go:172] (0xc001128dc0) (1) Data frame sent I0629 13:52:27.580845 6 log.go:172] (0xc001c171e0) (0xc001128dc0) Stream removed, broadcasting: 1 I0629 13:52:27.580857 6 log.go:172] (0xc001c171e0) Go away received I0629 13:52:27.580940 6 log.go:172] (0xc001c171e0) (0xc001128dc0) Stream removed, broadcasting: 1 I0629 13:52:27.580956 6 log.go:172] (0xc001c171e0) (0xc002258d20) Stream removed, broadcasting: 3 I0629 13:52:27.580962 6 log.go:172] (0xc001c171e0) (0xc002258e60) Stream removed, broadcasting: 5 Jun 29 13:52:27.580: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:52:27.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7794" for this suite. 
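The frame-by-frame stream above is just the exec transport; the actual connectivity check is the curl shown in the ExecWithOptions lines. The same probe by hand (namespace, pod name, container, and target IP are the ones from this run and will differ elsewhere):

kubectl -n pod-network-test-7794 exec host-test-container-pod -c hostexec -- /bin/sh -c \
  "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName"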
Jun 29 13:52:49.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:52:49.738: INFO: namespace pod-network-test-7794 deletion completed in 22.145354162s • [SLOW TEST:44.742 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:52:49.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-329 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 29 13:52:49.844: INFO: Found 0 stateful pods, waiting for 3 Jun 29 13:52:59.848: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 29 13:52:59.848: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 29 13:52:59.848: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 29 13:53:09.853: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 29 13:53:09.853: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 29 13:53:09.853: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 29 13:53:09.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 13:53:12.902: INFO: stderr: "I0629 13:53:12.729381 1317 log.go:172] (0xc000b88420) (0xc0007d2780) Create stream\nI0629 13:53:12.729416 1317 log.go:172] (0xc000b88420) (0xc0007d2780) Stream added, broadcasting: 1\nI0629 13:53:12.731551 1317 log.go:172] (0xc000b88420) Reply frame received for 1\nI0629 13:53:12.731580 1317 log.go:172] (0xc000b88420) (0xc0007d2820) Create stream\nI0629 13:53:12.731587 1317 log.go:172] (0xc000b88420) (0xc0007d2820) Stream added, broadcasting: 3\nI0629 13:53:12.732637 1317 log.go:172] (0xc000b88420) Reply frame received for 3\nI0629 13:53:12.732678 1317 log.go:172] (0xc000b88420) (0xc000b840a0) 
Create stream\nI0629 13:53:12.732694 1317 log.go:172] (0xc000b88420) (0xc000b840a0) Stream added, broadcasting: 5\nI0629 13:53:12.733814 1317 log.go:172] (0xc000b88420) Reply frame received for 5\nI0629 13:53:12.854498 1317 log.go:172] (0xc000b88420) Data frame received for 5\nI0629 13:53:12.854524 1317 log.go:172] (0xc000b840a0) (5) Data frame handling\nI0629 13:53:12.854537 1317 log.go:172] (0xc000b840a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 13:53:12.889595 1317 log.go:172] (0xc000b88420) Data frame received for 3\nI0629 13:53:12.889624 1317 log.go:172] (0xc0007d2820) (3) Data frame handling\nI0629 13:53:12.889651 1317 log.go:172] (0xc0007d2820) (3) Data frame sent\nI0629 13:53:12.889674 1317 log.go:172] (0xc000b88420) Data frame received for 3\nI0629 13:53:12.889685 1317 log.go:172] (0xc0007d2820) (3) Data frame handling\nI0629 13:53:12.889835 1317 log.go:172] (0xc000b88420) Data frame received for 5\nI0629 13:53:12.889852 1317 log.go:172] (0xc000b840a0) (5) Data frame handling\nI0629 13:53:12.891760 1317 log.go:172] (0xc000b88420) Data frame received for 1\nI0629 13:53:12.891776 1317 log.go:172] (0xc0007d2780) (1) Data frame handling\nI0629 13:53:12.891785 1317 log.go:172] (0xc0007d2780) (1) Data frame sent\nI0629 13:53:12.891811 1317 log.go:172] (0xc000b88420) (0xc0007d2780) Stream removed, broadcasting: 1\nI0629 13:53:12.891837 1317 log.go:172] (0xc000b88420) Go away received\nI0629 13:53:12.892261 1317 log.go:172] (0xc000b88420) (0xc0007d2780) Stream removed, broadcasting: 1\nI0629 13:53:12.892288 1317 log.go:172] (0xc000b88420) (0xc0007d2820) Stream removed, broadcasting: 3\nI0629 13:53:12.892303 1317 log.go:172] (0xc000b88420) (0xc000b840a0) Stream removed, broadcasting: 5\n" Jun 29 13:53:12.902: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 13:53:12.902: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 29 13:53:22.933: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 29 13:53:33.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 13:53:33.283: INFO: stderr: "I0629 13:53:33.182736 1351 log.go:172] (0xc000116dc0) (0xc00037a820) Create stream\nI0629 13:53:33.182792 1351 log.go:172] (0xc000116dc0) (0xc00037a820) Stream added, broadcasting: 1\nI0629 13:53:33.185703 1351 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0629 13:53:33.185803 1351 log.go:172] (0xc000116dc0) (0xc000970000) Create stream\nI0629 13:53:33.185864 1351 log.go:172] (0xc000116dc0) (0xc000970000) Stream added, broadcasting: 3\nI0629 13:53:33.187568 1351 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0629 13:53:33.187617 1351 log.go:172] (0xc000116dc0) (0xc0009700a0) Create stream\nI0629 13:53:33.187631 1351 log.go:172] (0xc000116dc0) (0xc0009700a0) Stream added, broadcasting: 5\nI0629 13:53:33.188620 1351 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0629 13:53:33.273847 1351 log.go:172] (0xc000116dc0) Data frame received for 5\nI0629 13:53:33.273888 1351 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0629 13:53:33.273906 1351 log.go:172] (0xc0009700a0) (5) Data frame sent\nI0629 13:53:33.273916 
1351 log.go:172] (0xc000116dc0) Data frame received for 5\nI0629 13:53:33.273926 1351 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0629 13:53:33.273937 1351 log.go:172] (0xc000116dc0) Data frame received for 3\nI0629 13:53:33.273945 1351 log.go:172] (0xc000970000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 13:53:33.273951 1351 log.go:172] (0xc000970000) (3) Data frame sent\nI0629 13:53:33.274058 1351 log.go:172] (0xc000116dc0) Data frame received for 3\nI0629 13:53:33.274102 1351 log.go:172] (0xc000970000) (3) Data frame handling\nI0629 13:53:33.275629 1351 log.go:172] (0xc000116dc0) Data frame received for 1\nI0629 13:53:33.275665 1351 log.go:172] (0xc00037a820) (1) Data frame handling\nI0629 13:53:33.275695 1351 log.go:172] (0xc00037a820) (1) Data frame sent\nI0629 13:53:33.275722 1351 log.go:172] (0xc000116dc0) (0xc00037a820) Stream removed, broadcasting: 1\nI0629 13:53:33.275869 1351 log.go:172] (0xc000116dc0) Go away received\nI0629 13:53:33.276196 1351 log.go:172] (0xc000116dc0) (0xc00037a820) Stream removed, broadcasting: 1\nI0629 13:53:33.276224 1351 log.go:172] (0xc000116dc0) (0xc000970000) Stream removed, broadcasting: 3\nI0629 13:53:33.276243 1351 log.go:172] (0xc000116dc0) (0xc0009700a0) Stream removed, broadcasting: 5\n" Jun 29 13:53:33.283: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 13:53:33.283: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 13:53:43.301: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update Jun 29 13:53:43.301: INFO: Waiting for Pod statefulset-329/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 29 13:53:43.302: INFO: Waiting for Pod statefulset-329/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 29 13:53:53.310: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update Jun 29 13:53:53.310: INFO: Waiting for Pod statefulset-329/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 29 13:54:03.310: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update STEP: Rolling back to a previous revision Jun 29 13:54:13.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 13:54:13.621: INFO: stderr: "I0629 13:54:13.447379 1371 log.go:172] (0xc00013adc0) (0xc0004e6820) Create stream\nI0629 13:54:13.447451 1371 log.go:172] (0xc00013adc0) (0xc0004e6820) Stream added, broadcasting: 1\nI0629 13:54:13.450598 1371 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0629 13:54:13.450643 1371 log.go:172] (0xc00013adc0) (0xc0004e6000) Create stream\nI0629 13:54:13.450655 1371 log.go:172] (0xc00013adc0) (0xc0004e6000) Stream added, broadcasting: 3\nI0629 13:54:13.451313 1371 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0629 13:54:13.451347 1371 log.go:172] (0xc00013adc0) (0xc000532280) Create stream\nI0629 13:54:13.451358 1371 log.go:172] (0xc00013adc0) (0xc000532280) Stream added, broadcasting: 5\nI0629 13:54:13.452126 1371 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0629 13:54:13.550366 1371 log.go:172] (0xc00013adc0) Data frame received for 5\nI0629 13:54:13.550389 1371 log.go:172] (0xc000532280) (5) Data frame handling\nI0629 13:54:13.550410 1371 log.go:172] (0xc000532280) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0629 13:54:13.611277 1371 log.go:172] (0xc00013adc0) Data frame received for 3\nI0629 13:54:13.611320 1371 log.go:172] (0xc0004e6000) (3) Data frame handling\nI0629 13:54:13.611329 1371 log.go:172] (0xc0004e6000) (3) Data frame sent\nI0629 13:54:13.611335 1371 log.go:172] (0xc00013adc0) Data frame received for 3\nI0629 13:54:13.611340 1371 log.go:172] (0xc0004e6000) (3) Data frame handling\nI0629 13:54:13.611348 1371 log.go:172] (0xc00013adc0) Data frame received for 5\nI0629 13:54:13.611353 1371 log.go:172] (0xc000532280) (5) Data frame handling\nI0629 13:54:13.614155 1371 log.go:172] (0xc00013adc0) Data frame received for 1\nI0629 13:54:13.614187 1371 log.go:172] (0xc0004e6820) (1) Data frame handling\nI0629 13:54:13.614208 1371 log.go:172] (0xc0004e6820) (1) Data frame sent\nI0629 13:54:13.614225 1371 log.go:172] (0xc00013adc0) (0xc0004e6820) Stream removed, broadcasting: 1\nI0629 13:54:13.614254 1371 log.go:172] (0xc00013adc0) Go away received\nI0629 13:54:13.614731 1371 log.go:172] (0xc00013adc0) (0xc0004e6820) Stream removed, broadcasting: 1\nI0629 13:54:13.614763 1371 log.go:172] (0xc00013adc0) (0xc0004e6000) Stream removed, broadcasting: 3\nI0629 13:54:13.614776 1371 log.go:172] (0xc00013adc0) (0xc000532280) Stream removed, broadcasting: 5\n" Jun 29 13:54:13.621: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 13:54:13.621: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 13:54:23.656: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 29 13:54:33.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 13:54:33.939: INFO: stderr: "I0629 13:54:33.844003 1393 log.go:172] (0xc0006beb00) (0xc0007566e0) Create stream\nI0629 13:54:33.844069 1393 log.go:172] (0xc0006beb00) (0xc0007566e0) Stream added, broadcasting: 1\nI0629 13:54:33.846865 1393 log.go:172] (0xc0006beb00) Reply frame received for 1\nI0629 13:54:33.846936 1393 log.go:172] (0xc0006beb00) (0xc00033c320) Create stream\nI0629 13:54:33.846967 1393 log.go:172] (0xc0006beb00) (0xc00033c320) Stream added, broadcasting: 3\nI0629 13:54:33.847881 1393 log.go:172] (0xc0006beb00) Reply frame received for 3\nI0629 13:54:33.847920 1393 log.go:172] (0xc0006beb00) (0xc00033c3c0) Create stream\nI0629 13:54:33.847931 1393 log.go:172] (0xc0006beb00) (0xc00033c3c0) Stream added, broadcasting: 5\nI0629 13:54:33.848876 1393 log.go:172] (0xc0006beb00) Reply frame received for 5\nI0629 13:54:33.931808 1393 log.go:172] (0xc0006beb00) Data frame received for 5\nI0629 13:54:33.931839 1393 log.go:172] (0xc00033c3c0) (5) Data frame handling\nI0629 13:54:33.931891 1393 log.go:172] (0xc0006beb00) Data frame received for 3\nI0629 13:54:33.931930 1393 log.go:172] (0xc00033c320) (3) Data frame handling\nI0629 13:54:33.931944 1393 log.go:172] (0xc00033c320) (3) Data frame sent\nI0629 13:54:33.931955 1393 log.go:172] (0xc0006beb00) Data frame received for 3\nI0629 13:54:33.931963 1393 log.go:172] (0xc00033c320) (3) Data frame handling\nI0629 13:54:33.931992 1393 log.go:172] (0xc00033c3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 13:54:33.932007 1393 log.go:172] (0xc0006beb00) Data frame received for 5\nI0629 13:54:33.932016 1393 log.go:172] (0xc00033c3c0) (5) Data frame handling\nI0629 
13:54:33.933914 1393 log.go:172] (0xc0006beb00) Data frame received for 1\nI0629 13:54:33.933941 1393 log.go:172] (0xc0007566e0) (1) Data frame handling\nI0629 13:54:33.933960 1393 log.go:172] (0xc0007566e0) (1) Data frame sent\nI0629 13:54:33.933985 1393 log.go:172] (0xc0006beb00) (0xc0007566e0) Stream removed, broadcasting: 1\nI0629 13:54:33.933999 1393 log.go:172] (0xc0006beb00) Go away received\nI0629 13:54:33.934445 1393 log.go:172] (0xc0006beb00) (0xc0007566e0) Stream removed, broadcasting: 1\nI0629 13:54:33.934472 1393 log.go:172] (0xc0006beb00) (0xc00033c320) Stream removed, broadcasting: 3\nI0629 13:54:33.934482 1393 log.go:172] (0xc0006beb00) (0xc00033c3c0) Stream removed, broadcasting: 5\n" Jun 29 13:54:33.939: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 13:54:33.939: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 13:54:44.007: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update Jun 29 13:54:44.007: INFO: Waiting for Pod statefulset-329/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 29 13:54:44.007: INFO: Waiting for Pod statefulset-329/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 29 13:54:54.016: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update Jun 29 13:54:54.016: INFO: Waiting for Pod statefulset-329/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 29 13:55:04.016: INFO: Waiting for StatefulSet statefulset-329/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 29 13:55:14.016: INFO: Deleting all statefulset in ns statefulset-329 Jun 29 13:55:14.019: INFO: Scaling statefulset ss2 to 0 Jun 29 13:55:34.037: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 13:55:34.040: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:55:34.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-329" for this suite. 
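
The sequence above is a template update followed by a rollback, which the test performs by re-applying the old image. A rough kubectl equivalent, assuming the ss2 template's container is named nginx (verify with kubectl get statefulset ss2 -o jsonpath='{.spec.template.spec.containers[0].name}'):

    # Roll the StatefulSet forward to the new image, ordinal by ordinal
    kubectl -n statefulset-329 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    kubectl -n statefulset-329 rollout status statefulset/ss2
    # Return to the previous controller revision (ss2-6c5cd755cd in this run)
    kubectl -n statefulset-329 rollout undo statefulset/ss2

The revision hashes logged above (ss2-6c5cd755cd, ss2-7c9b54fd4c) are the ControllerRevision names the rollback switches between.
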
Jun 29 13:55:40.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:55:40.163: INFO: namespace statefulset-329 deletion completed in 6.101924782s • [SLOW TEST:170.424 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:55:40.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 13:55:40.277: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 29 13:55:45.282: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 29 13:55:45.282: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 29 13:55:45.302: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-263,SelfLink:/apis/apps/v1/namespaces/deployment-263/deployments/test-cleanup-deployment,UID:715ae332-052b-4ab0-bdd6-1c0f329c6315,ResourceVersion:19114699,Generation:1,CreationTimestamp:2020-06-29 13:55:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 29 13:55:45.324: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-263,SelfLink:/apis/apps/v1/namespaces/deployment-263/replicasets/test-cleanup-deployment-55bbcbc84c,UID:55b0db20-4410-4d0b-a1e8-22b5223e0a24,ResourceVersion:19114701,Generation:1,CreationTimestamp:2020-06-29 13:55:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 715ae332-052b-4ab0-bdd6-1c0f329c6315 0xc000b089c7 0xc000b089c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 29 13:55:45.324: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 29 13:55:45.324: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-263,SelfLink:/apis/apps/v1/namespaces/deployment-263/replicasets/test-cleanup-controller,UID:48f34e9a-7aea-4ad3-a7ef-3cbd81c4f5a6,ResourceVersion:19114700,Generation:1,CreationTimestamp:2020-06-29 13:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 715ae332-052b-4ab0-bdd6-1c0f329c6315 0xc000b08827 0xc000b08828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 29 13:55:45.392: INFO: Pod "test-cleanup-controller-vf2tl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vf2tl,GenerateName:test-cleanup-controller-,Namespace:deployment-263,SelfLink:/api/v1/namespaces/deployment-263/pods/test-cleanup-controller-vf2tl,UID:aeec52fb-9745-4841-ad39-44024f57e622,ResourceVersion:19114693,Generation:0,CreationTimestamp:2020-06-29 13:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 48f34e9a-7aea-4ad3-a7ef-3cbd81c4f5a6 0xc0024d35b7 0xc0024d35b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7jlpp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7jlpp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7jlpp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024d3660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024d3680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:55:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:55:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:55:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:55:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.75,StartTime:2020-06-29 13:55:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 13:55:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3d68f92ffbe0c4c0b713991b0e9ffbbd4fae83d6699a91ea03d388f14c6d2845}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 13:55:45.392: INFO: Pod "test-cleanup-deployment-55bbcbc84c-sv9dr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-sv9dr,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-263,SelfLink:/api/v1/namespaces/deployment-263/pods/test-cleanup-deployment-55bbcbc84c-sv9dr,UID:e3d6b5b9-4661-496f-bd6f-336cc0cc14fd,ResourceVersion:19114707,Generation:0,CreationTimestamp:2020-06-29 13:55:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 55b0db20-4410-4d0b-a1e8-22b5223e0a24 0xc0024d3767 0xc0024d3768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7jlpp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7jlpp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-7jlpp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024d37e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024d3800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 13:55:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:55:45.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-263" for this suite. 
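
The Deployment dump above shows RevisionHistoryLimit:*0, which is the setting under test: with spec.revisionHistoryLimit set to 0, the deployment controller deletes every superseded ReplicaSet instead of retaining it. A minimal sketch against a hypothetical deployment named my-deploy:

    # Keep no old ReplicaSets around after rollouts
    kubectl patch deployment my-deploy --type merge -p '{"spec":{"revisionHistoryLimit":0}}'
    # After the next template change, only the current ReplicaSet should remain
    kubectl get rs -l app=my-deploy

Note the trade-off: with no retained history, kubectl rollout undo has nothing to roll back to.
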
Jun 29 13:55:51.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:55:51.516: INFO: namespace deployment-263 deletion completed in 6.087909193s • [SLOW TEST:11.353 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:55:51.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-d26w STEP: Creating a pod to test atomic-volume-subpath Jun 29 13:55:51.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d26w" in namespace "subpath-2332" to be "success or failure" Jun 29 13:55:51.703: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352915ms Jun 29 13:55:53.707: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006439985s Jun 29 13:55:55.710: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 4.009933517s Jun 29 13:55:57.714: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 6.013606101s Jun 29 13:55:59.718: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 8.017499463s Jun 29 13:56:01.770: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 10.070278784s Jun 29 13:56:03.775: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 12.074892306s Jun 29 13:56:05.779: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 14.079007076s Jun 29 13:56:07.783: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 16.083111084s Jun 29 13:56:09.787: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 18.08740535s Jun 29 13:56:11.800: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 20.099548649s Jun 29 13:56:13.804: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Running", Reason="", readiness=true. Elapsed: 22.104121648s Jun 29 13:56:15.820: INFO: Pod "pod-subpath-test-configmap-d26w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.119820195s STEP: Saw pod success Jun 29 13:56:15.820: INFO: Pod "pod-subpath-test-configmap-d26w" satisfied condition "success or failure" Jun 29 13:56:15.825: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-d26w container test-container-subpath-configmap-d26w: STEP: delete the pod Jun 29 13:56:15.854: INFO: Waiting for pod pod-subpath-test-configmap-d26w to disappear Jun 29 13:56:15.864: INFO: Pod pod-subpath-test-configmap-d26w no longer exists STEP: Deleting pod pod-subpath-test-configmap-d26w Jun 29 13:56:15.864: INFO: Deleting pod "pod-subpath-test-configmap-d26w" in namespace "subpath-2332" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:56:15.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2332" for this suite. Jun 29 13:56:21.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:56:21.964: INFO: namespace subpath-2332 deletion completed in 6.095993354s • [SLOW TEST:30.447 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:56:21.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0629 13:56:52.606955 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
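
The pod-subpath-test-configmap-d26w pod that succeeded above mounts a single ConfigMap key at a file path via volumeMounts[].subPath. A minimal sketch with hypothetical names (demo-cm, subpath-demo):

    kubectl create configmap demo-cm --from-literal=index.html='hello'
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "cat /data/index.html"]
        volumeMounts:
        - name: cm
          mountPath: /data/index.html
          subPath: index.html      # mount one key as a file, not the whole volume
      volumes:
      - name: cm
        configMap:
          name: demo-cm
    EOF

The atomic-writer behavior the test exercises is specific to subPath: unlike whole-volume ConfigMap mounts, subPath-mounted files do not receive live updates when the ConfigMap changes.
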
Jun 29 13:56:52.607: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:56:52.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-444" for this suite. Jun 29 13:56:58.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:56:58.702: INFO: namespace gc-444 deletion completed in 6.092846694s • [SLOW TEST:36.738 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:56:58.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 29 13:56:58.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5773' Jun 29 13:56:59.283: INFO: stderr: "" Jun 29 13:56:59.283: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
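
The garbage-collector spec above deletes the Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet survives. The kubectl equivalent, against a hypothetical deployment my-deploy:

    # Orphan dependents instead of cascading the delete
    kubectl delete deployment my-deploy --cascade=orphan   # kubectl >= 1.20
    # (on clients contemporary with this v1.15 suite: --cascade=false)
    kubectl get rs -l app=my-deploy   # the ReplicaSet is still there, now ownerless

The orphaned ReplicaSet keeps its pods running; it simply loses its ownerReference to the deleted Deployment.
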
Jun 29 13:56:59.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5773' Jun 29 13:56:59.385: INFO: stderr: "" Jun 29 13:56:59.385: INFO: stdout: "update-demo-nautilus-g2rbl update-demo-nautilus-nnmn8 " Jun 29 13:56:59.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2rbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5773' Jun 29 13:56:59.503: INFO: stderr: "" Jun 29 13:56:59.503: INFO: stdout: "" Jun 29 13:56:59.503: INFO: update-demo-nautilus-g2rbl is created but not running Jun 29 13:57:04.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5773' Jun 29 13:57:04.609: INFO: stderr: "" Jun 29 13:57:04.609: INFO: stdout: "update-demo-nautilus-g2rbl update-demo-nautilus-nnmn8 " Jun 29 13:57:04.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2rbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5773' Jun 29 13:57:04.696: INFO: stderr: "" Jun 29 13:57:04.696: INFO: stdout: "true" Jun 29 13:57:04.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2rbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5773' Jun 29 13:57:04.792: INFO: stderr: "" Jun 29 13:57:04.792: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:57:04.792: INFO: validating pod update-demo-nautilus-g2rbl Jun 29 13:57:04.797: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:57:04.797: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:57:04.797: INFO: update-demo-nautilus-g2rbl is verified up and running Jun 29 13:57:04.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nnmn8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5773' Jun 29 13:57:04.894: INFO: stderr: "" Jun 29 13:57:04.894: INFO: stdout: "true" Jun 29 13:57:04.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nnmn8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5773' Jun 29 13:57:04.994: INFO: stderr: "" Jun 29 13:57:04.994: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:57:04.994: INFO: validating pod update-demo-nautilus-nnmn8 Jun 29 13:57:05.007: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:57:05.007: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 29 13:57:05.007: INFO: update-demo-nautilus-nnmn8 is verified up and running STEP: using delete to clean up resources Jun 29 13:57:05.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5773' Jun 29 13:57:05.142: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 13:57:05.142: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 29 13:57:05.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5773' Jun 29 13:57:05.245: INFO: stderr: "No resources found.\n" Jun 29 13:57:05.245: INFO: stdout: "" Jun 29 13:57:05.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 29 13:57:05.340: INFO: stderr: "" Jun 29 13:57:05.340: INFO: stdout: "update-demo-nautilus-g2rbl\nupdate-demo-nautilus-nnmn8\n" Jun 29 13:57:05.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5773' Jun 29 13:57:05.942: INFO: stderr: "No resources found.\n" Jun 29 13:57:05.942: INFO: stdout: "" Jun 29 13:57:05.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 29 13:57:06.040: INFO: stderr: "" Jun 29 13:57:06.040: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:57:06.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5773" for this suite. 
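
The update-demo flow above is: create a ReplicationController from a manifest on stdin, poll the pods with go-templates until each container reports running, then force-delete. Condensed, with the manifest path hypothetical:

    kubectl -n kubectl-5773 create -f update-demo-nautilus.yaml
    kubectl -n kubectl-5773 get pods -l name=update-demo
    # Force deletion, as the test does; note the warning it prints above
    kubectl -n kubectl-5773 delete rc update-demo-nautilus --grace-period=0 --force
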
Jun 29 13:57:26.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:57:26.296: INFO: namespace kubectl-5773 deletion completed in 20.251853293s • [SLOW TEST:27.593 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:57:26.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:57:52.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6138" for this suite. Jun 29 13:57:58.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:57:58.617: INFO: namespace namespaces-6138 deletion completed in 6.125799423s STEP: Destroying namespace "nsdeletetest-3189" for this suite. Jun 29 13:57:58.619: INFO: Namespace nsdeletetest-3189 was already deleted STEP: Destroying namespace "nsdeletetest-1878" for this suite. 
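
What the namespaces spec verifies is that namespace deletion cascades: every pod in the namespace is removed before the namespace itself disappears. A minimal sketch with a hypothetical namespace:

    kubectl create namespace demo-ns
    kubectl -n demo-ns run test --image=nginx --restart=Never
    kubectl delete namespace demo-ns     # namespace enters Terminating
    kubectl get pods -n demo-ns          # eventually: No resources found
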
Jun 29 13:58:04.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:58:04.727: INFO: namespace nsdeletetest-1878 deletion completed in 6.10723026s • [SLOW TEST:38.431 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:58:04.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-aca63b5d-577a-47cf-a6f8-cccba9b098d3 STEP: Creating configMap with name cm-test-opt-upd-e1d7c80b-bf7c-4a49-bb57-8f36e0420f42 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-aca63b5d-577a-47cf-a6f8-cccba9b098d3 STEP: Updating configmap cm-test-opt-upd-e1d7c80b-bf7c-4a49-bb57-8f36e0420f42 STEP: Creating configMap with name cm-test-opt-create-690c87f0-4353-46f0-a219-a53252d6c0c5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 13:59:23.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8266" for this suite. 
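
The projected-configMap spec above mounts ConfigMaps through one projected volume, then deletes one, updates another, and creates a third, expecting the kubelet to reflect all three changes in the mounted files. The optional flag is what lets the pod run while a referenced ConfigMap is absent. A sketch with hypothetical names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: proj
          mountPath: /etc/projected
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: maybe-missing-cm
              optional: true    # pod starts even if this ConfigMap does not exist yet
    EOF
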
Jun 29 13:59:45.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 13:59:45.391: INFO: namespace projected-8266 deletion completed in 22.089811581s • [SLOW TEST:100.664 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 13:59:45.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 29 13:59:45.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1340' Jun 29 13:59:45.766: INFO: stderr: "" Jun 29 13:59:45.766: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 29 13:59:45.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1340' Jun 29 13:59:45.906: INFO: stderr: "" Jun 29 13:59:45.906: INFO: stdout: "update-demo-nautilus-7gqwf update-demo-nautilus-gx5hj " Jun 29 13:59:45.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gqwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 13:59:45.999: INFO: stderr: "" Jun 29 13:59:45.999: INFO: stdout: "" Jun 29 13:59:45.999: INFO: update-demo-nautilus-7gqwf is created but not running Jun 29 13:59:50.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1340' Jun 29 13:59:51.111: INFO: stderr: "" Jun 29 13:59:51.111: INFO: stdout: "update-demo-nautilus-7gqwf update-demo-nautilus-gx5hj " Jun 29 13:59:51.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gqwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 13:59:51.208: INFO: stderr: "" Jun 29 13:59:51.208: INFO: stdout: "true" Jun 29 13:59:51.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gqwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 13:59:51.305: INFO: stderr: "" Jun 29 13:59:51.305: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:59:51.305: INFO: validating pod update-demo-nautilus-7gqwf Jun 29 13:59:51.309: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:59:51.309: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:59:51.309: INFO: update-demo-nautilus-7gqwf is verified up and running Jun 29 13:59:51.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx5hj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 13:59:51.402: INFO: stderr: "" Jun 29 13:59:51.402: INFO: stdout: "true" Jun 29 13:59:51.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx5hj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 13:59:51.489: INFO: stderr: "" Jun 29 13:59:51.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 29 13:59:51.489: INFO: validating pod update-demo-nautilus-gx5hj Jun 29 13:59:51.493: INFO: got data: { "image": "nautilus.jpg" } Jun 29 13:59:51.493: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 29 13:59:51.493: INFO: update-demo-nautilus-gx5hj is verified up and running STEP: rolling-update to new replication controller Jun 29 13:59:51.496: INFO: scanned /root for discovery docs: Jun 29 13:59:51.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1340' Jun 29 14:00:14.231: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 29 14:00:14.231: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 29 14:00:14.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1340' Jun 29 14:00:14.330: INFO: stderr: "" Jun 29 14:00:14.330: INFO: stdout: "update-demo-kitten-495f7 update-demo-kitten-dpnp7 " Jun 29 14:00:14.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-495f7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 14:00:14.428: INFO: stderr: "" Jun 29 14:00:14.428: INFO: stdout: "true" Jun 29 14:00:14.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-495f7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 14:00:14.518: INFO: stderr: "" Jun 29 14:00:14.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 29 14:00:14.518: INFO: validating pod update-demo-kitten-495f7 Jun 29 14:00:14.534: INFO: got data: { "image": "kitten.jpg" } Jun 29 14:00:14.534: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 29 14:00:14.534: INFO: update-demo-kitten-495f7 is verified up and running Jun 29 14:00:14.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dpnp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 14:00:14.635: INFO: stderr: "" Jun 29 14:00:14.635: INFO: stdout: "true" Jun 29 14:00:14.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dpnp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1340' Jun 29 14:00:14.728: INFO: stderr: "" Jun 29 14:00:14.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 29 14:00:14.728: INFO: validating pod update-demo-kitten-dpnp7 Jun 29 14:00:14.742: INFO: got data: { "image": "kitten.jpg" } Jun 29 14:00:14.742: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 29 14:00:14.742: INFO: update-demo-kitten-dpnp7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:00:14.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1340" for this suite. 
Jun 29 14:00:36.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:00:36.841: INFO: namespace kubectl-1340 deletion completed in 22.095039587s • [SLOW TEST:51.450 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:00:36.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-37e0a62f-7662-4f4f-af2b-0da335833cce STEP: Creating a pod to test consume configMaps Jun 29 14:00:36.937: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5" in namespace "projected-6663" to be "success or failure" Jun 29 14:00:36.957: INFO: Pod "pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.401697ms Jun 29 14:00:38.961: INFO: Pod "pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024149241s Jun 29 14:00:40.966: INFO: Pod "pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028790578s STEP: Saw pod success Jun 29 14:00:40.966: INFO: Pod "pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5" satisfied condition "success or failure" Jun 29 14:00:40.969: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5 container projected-configmap-volume-test: STEP: delete the pod Jun 29 14:00:40.993: INFO: Waiting for pod pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5 to disappear Jun 29 14:00:41.006: INFO: Pod pod-projected-configmaps-296b534f-7e2a-4e43-a948-1053d05f75f5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:00:41.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6663" for this suite. 
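[Editor's note] The projected configMap spec above mounts a configMap through a "projected" volume source and remaps a key to a custom path, then reads the file back from a short-lived test container. A hedged sketch of the pod shape being exercised follows; the configMap and container names are taken from the log, while the key, path, test image, and args are illustrative assumptions, since the fixture values are not shown in this output.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example        # hypothetical; the test uses a UUID-suffixed name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test       # container name as logged
        image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0                 # assumed test image
        args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]   # assumed
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-map-37e0a62f-7662-4f4f-af2b-0da335833cce
              items:
              - key: data-1                         # illustrative key-to-path mapping
                path: path/to/data-2

"Success or failure" in the log means the pod must reach phase Succeeded or Failed; the framework then pulls the container log, as seen above, and checks the projected file contents.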
Jun 29 14:00:47.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:00:47.135: INFO: namespace projected-6663 deletion completed in 6.124989774s • [SLOW TEST:10.294 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:00:47.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:00:47.246: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9b16588c-e9b4-449a-b13a-0a1b3ae6f5c1", Controller:(*bool)(0xc002aa2c12), BlockOwnerDeletion:(*bool)(0xc002aa2c13)}} Jun 29 14:00:47.260: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c5e5dafd-dbda-4a91-91c8-9b484b8cdb53", Controller:(*bool)(0xc0026b729a), BlockOwnerDeletion:(*bool)(0xc0026b729b)}} Jun 29 14:00:47.265: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"48aaf1fe-2c92-4063-80ce-33806416b421", Controller:(*bool)(0xc0026b742a), BlockOwnerDeletion:(*bool)(0xc0026b742b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:00:52.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7078" for this suite. 
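[Editor's note] The garbage collector spec above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReferences dumps) and checks that garbage collection is not deadlocked by the circle. A sketch of the metadata stanza involved, using the UID actually logged for pod3; the controller and blockOwnerDeletion values are shown as true for illustration only, since the log prints them as opaque *bool pointers.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod3
        uid: 9b16588c-e9b4-449a-b13a-0a1b3ae6f5c1   # UID as logged
        controller: true                            # assumed value; log shows a non-nil pointer
        blockOwnerDeletion: true                    # assumed value; log shows a non-nil pointer

Once any member of the cycle is deleted, every remaining owner is itself unreachable, so the collector is expected to remove all three pods rather than block, which the quiet five-second gap before the teardown above is consistent with.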
Jun 29 14:00:58.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:00:58.400: INFO: namespace gc-7078 deletion completed in 6.096300282s • [SLOW TEST:11.264 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:00:58.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:01:03.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7241" for this suite. Jun 29 14:01:10.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:01:10.173: INFO: namespace watch-7241 deletion completed in 6.210929333s • [SLOW TEST:11.772 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:01:10.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 29 14:01:10.244: INFO: Waiting up to 5m0s for pod "pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7" in namespace "emptydir-1164" to be "success or failure" Jun 29 14:01:10.260: INFO: Pod "pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.108452ms Jun 29 14:01:12.264: INFO: Pod "pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0207152s Jun 29 14:01:14.269: INFO: Pod "pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025770896s STEP: Saw pod success Jun 29 14:01:14.269: INFO: Pod "pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7" satisfied condition "success or failure" Jun 29 14:01:14.273: INFO: Trying to get logs from node iruya-worker2 pod pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7 container test-container: STEP: delete the pod Jun 29 14:01:14.308: INFO: Waiting for pod pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7 to disappear Jun 29 14:01:14.314: INFO: Pod pod-0ff536b3-cfc0-4c9d-b954-ee976605e7a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:01:14.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1164" for this suite. Jun 29 14:01:20.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:01:20.401: INFO: namespace emptydir-1164 deletion completed in 6.083686574s • [SLOW TEST:10.228 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:01:20.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 29 14:01:20.490: INFO: Waiting up to 5m0s for pod "pod-ce2a459b-366f-476e-a628-db8a91b47af7" in namespace "emptydir-177" to be "success or failure" Jun 29 14:01:20.541: INFO: Pod "pod-ce2a459b-366f-476e-a628-db8a91b47af7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.753694ms Jun 29 14:01:22.545: INFO: Pod "pod-ce2a459b-366f-476e-a628-db8a91b47af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055764769s Jun 29 14:01:24.550: INFO: Pod "pod-ce2a459b-366f-476e-a628-db8a91b47af7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060127503s STEP: Saw pod success Jun 29 14:01:24.550: INFO: Pod "pod-ce2a459b-366f-476e-a628-db8a91b47af7" satisfied condition "success or failure" Jun 29 14:01:24.553: INFO: Trying to get logs from node iruya-worker pod pod-ce2a459b-366f-476e-a628-db8a91b47af7 container test-container: STEP: delete the pod Jun 29 14:01:24.579: INFO: Waiting for pod pod-ce2a459b-366f-476e-a628-db8a91b47af7 to disappear Jun 29 14:01:24.595: INFO: Pod pod-ce2a459b-366f-476e-a628-db8a91b47af7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:01:24.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-177" for this suite. Jun 29 14:01:30.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:01:30.731: INFO: namespace emptydir-177 deletion completed in 6.132739822s • [SLOW TEST:10.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:01:30.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 29 14:01:30.892: INFO: Waiting up to 5m0s for pod "downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f" in namespace "downward-api-1026" to be "success or failure" Jun 29 14:01:30.895: INFO: Pod "downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649813ms Jun 29 14:01:32.900: INFO: Pod "downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008180174s Jun 29 14:01:34.904: INFO: Pod "downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012149415s STEP: Saw pod success Jun 29 14:01:34.904: INFO: Pod "downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f" satisfied condition "success or failure" Jun 29 14:01:34.907: INFO: Trying to get logs from node iruya-worker pod downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f container dapi-container: STEP: delete the pod Jun 29 14:01:34.942: INFO: Waiting for pod downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f to disappear Jun 29 14:01:34.955: INFO: Pod downward-api-3df8cfc1-be92-44c1-b631-c572983ec02f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:01:34.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1026" for this suite. Jun 29 14:01:40.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:01:41.056: INFO: namespace downward-api-1026 deletion completed in 6.097143042s • [SLOW TEST:10.324 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:01:41.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:01:41.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259" in namespace "projected-8347" to be "success or failure" Jun 29 14:01:41.159: INFO: Pod "downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259": Phase="Pending", Reason="", readiness=false. Elapsed: 12.851912ms Jun 29 14:01:43.299: INFO: Pod "downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153413206s Jun 29 14:01:45.303: INFO: Pod "downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.156833629s STEP: Saw pod success Jun 29 14:01:45.303: INFO: Pod "downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259" satisfied condition "success or failure" Jun 29 14:01:45.305: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259 container client-container: STEP: delete the pod Jun 29 14:01:45.537: INFO: Waiting for pod downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259 to disappear Jun 29 14:01:45.614: INFO: Pod downwardapi-volume-db11c878-0a0d-4698-93e0-00b16c06b259 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:01:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8347" for this suite. Jun 29 14:01:51.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:01:51.813: INFO: namespace projected-8347 deletion completed in 6.194855444s • [SLOW TEST:10.756 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:01:51.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:01:51.895: INFO: Creating deployment "nginx-deployment" Jun 29 14:01:51.919: INFO: Waiting for observed generation 1 Jun 29 14:01:53.929: INFO: Waiting for all required pods to come up Jun 29 14:01:53.934: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 29 14:02:03.951: INFO: Waiting for deployment "nginx-deployment" to complete Jun 29 14:02:03.956: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 29 14:02:03.960: INFO: Updating deployment nginx-deployment Jun 29 14:02:03.960: INFO: Waiting for observed generation 2 Jun 29 14:02:05.982: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 29 14:02:05.984: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 29 14:02:05.986: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 29 14:02:05.992: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 29 14:02:05.992: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 29 14:02:05.993: INFO: Waiting for the second rollout's replicaset of 
deployment "nginx-deployment" to have desired number of replicas Jun 29 14:02:05.996: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 29 14:02:05.996: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 29 14:02:06.001: INFO: Updating deployment nginx-deployment Jun 29 14:02:06.001: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 29 14:02:06.058: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 29 14:02:06.188: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 29 14:02:08.766: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/deployments/nginx-deployment,UID:7907f8fc-658a-4613-a0cf-41aa04c3c907,ResourceVersion:19116359,Generation:3,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-06-29 14:02:06 +0000 UTC 2020-06-29 14:02:06 +0000 UTC MinimumReplicasUnavailable Deployment does not 
have minimum availability.} {Progressing True 2020-06-29 14:02:06 +0000 UTC 2020-06-29 14:01:51 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 29 14:02:08.952: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/replicasets/nginx-deployment-55fb7cb77f,UID:cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3,ResourceVersion:19116352,Generation:3,CreationTimestamp:2020-06-29 14:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7907f8fc-658a-4613-a0cf-41aa04c3c907 0xc00272c8d7 0xc00272c8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 29 14:02:08.952: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 29 14:02:08.952: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-408,SelfLink:/apis/apps/v1/namespaces/deployment-408/replicasets/nginx-deployment-7b8c6f4498,UID:9f012ec6-36c6-4894-9040-f2e98bcbd2c1,ResourceVersion:19116346,Generation:3,CreationTimestamp:2020-06-29 14:01:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7907f8fc-658a-4613-a0cf-41aa04c3c907 0xc00272c9b7 0xc00272c9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 29 14:02:08.959: INFO: Pod "nginx-deployment-55fb7cb77f-4k299" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4k299,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-4k299,UID:60b93883-b853-443b-9b93-bad878638b9a,ResourceVersion:19116421,Generation:0,CreationTimestamp:2020-06-29 14:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025dec27 0xc0025dec28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025deca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025decc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.92,StartTime:2020-06-29 14:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.959: INFO: Pod "nginx-deployment-55fb7cb77f-79nb7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-79nb7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-79nb7,UID:2e606a5c-776a-41cb-8c30-56e80434508a,ResourceVersion:19116286,Generation:0,CreationTimestamp:2020-06-29 14:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025dedc7 0xc0025dedc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dee40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025dee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.959: INFO: Pod "nginx-deployment-55fb7cb77f-84jfs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-84jfs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-84jfs,UID:06d3fb23-b83a-421f-ac4a-eeb762cca3b2,ResourceVersion:19116423,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025def37 0xc0025def38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025defb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025defd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.959: INFO: Pod "nginx-deployment-55fb7cb77f-chhd9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-chhd9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-chhd9,UID:6403acb7-9735-4256-96e3-2053fb2de73d,ResourceVersion:19116283,Generation:0,CreationTimestamp:2020-06-29 14:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df0a7 0xc0025df0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.960: INFO: Pod "nginx-deployment-55fb7cb77f-d97rm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d97rm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-d97rm,UID:32da235d-cc6a-47f0-8db1-e870e52acac9,ResourceVersion:19116271,Generation:0,CreationTimestamp:2020-06-29 14:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df237 0xc0025df238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.960: INFO: Pod "nginx-deployment-55fb7cb77f-fmj7d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fmj7d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-fmj7d,UID:c2308fac-6877-427d-9d23-592fef966359,ResourceVersion:19116386,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df3b7 0xc0025df3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.960: INFO: Pod "nginx-deployment-55fb7cb77f-hvgrt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hvgrt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-hvgrt,UID:174346b2-e886-4bf2-90c6-627a0360bd80,ResourceVersion:19116370,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df527 0xc0025df528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.961: INFO: Pod "nginx-deployment-55fb7cb77f-l774b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l774b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-l774b,UID:e946704d-df82-4176-bd59-2a1d5704cb27,ResourceVersion:19116380,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df697 0xc0025df698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df710} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.961: INFO: Pod "nginx-deployment-55fb7cb77f-m9cnm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m9cnm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-m9cnm,UID:e4a63445-c3d7-4a5a-9848-a872f0b4c668,ResourceVersion:19116257,Generation:0,CreationTimestamp:2020-06-29 14:02:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df817 0xc0025df818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025df890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025df8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:03 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.961: INFO: Pod "nginx-deployment-55fb7cb77f-pw4d7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pw4d7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-pw4d7,UID:f49f0dab-16ce-4848-be61-cffe4790ceb0,ResourceVersion:19116365,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025df987 0xc0025df988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dfa00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025dfa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.961: INFO: Pod "nginx-deployment-55fb7cb77f-pwm4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pwm4t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-pwm4t,UID:d36fb363-6fe4-41fd-97ea-1eb0e780f680,ResourceVersion:19116388,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025dfb17 0xc0025dfb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dfb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025dfbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.962: INFO: Pod "nginx-deployment-55fb7cb77f-t72rv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t72rv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-t72rv,UID:d87acc5a-4641-4385-b61e-59f25b4422e1,ResourceVersion:19116404,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025dfc87 0xc0025dfc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dfd00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025dfd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.962: INFO: Pod "nginx-deployment-55fb7cb77f-wgw58" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgw58,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-55fb7cb77f-wgw58,UID:94653d6c-fc0b-4161-8c47-63484c01908b,ResourceVersion:19116367,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f cda7a3dd-4fd0-4bda-8ccc-0c36b525faf3 0xc0025dfdf7 0xc0025dfdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dfe70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025dfe90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.962: INFO: Pod "nginx-deployment-7b8c6f4498-2kgbn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2kgbn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-2kgbn,UID:cb7e59dd-5369-4306-b627-33d4c86ce4e0,ResourceVersion:19116174,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0025dff67 0xc0025dff68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025dffe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.28,StartTime:2020-06-29 14:01:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:01:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://21c8d17ce98d1374ae6f10f7d2ecc8cb8f94236d8095f0eadfad300b42614731}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.962: INFO: Pod "nginx-deployment-7b8c6f4498-4b8wp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4b8wp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-4b8wp,UID:1b4450d6-6d84-4c8e-9d8e-7d5c8c9e5fa8,ResourceVersion:19116405,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026fe2f7 0xc0026fe2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.962: INFO: Pod "nginx-deployment-7b8c6f4498-5j9hw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5j9hw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-5j9hw,UID:7cff8fcd-2205-497d-b636-77f55e2eee15,ResourceVersion:19116351,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026fe7c7 0xc0026fe7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fe910} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fe930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.963: INFO: Pod "nginx-deployment-7b8c6f4498-76f56" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76f56,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-76f56,UID:4d4c195d-67ea-4038-93e7-de11385878ad,ResourceVersion:19116202,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026feb57 0xc0026feb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fee90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026feeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.30,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:01:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ec1739727bfdac6a4447a199d612f1d9712bf260c74925d8ea339a76081e97b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.963: INFO: Pod "nginx-deployment-7b8c6f4498-97p6w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-97p6w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-97p6w,UID:a2c0bf23-978b-48c0-8709-76902bc4d53c,ResourceVersion:19116186,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026ff477 0xc0026ff478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ff690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ff760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.29,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d64d833c28b1e237bfcac6c8947942baa1a8e37a36788485c9eeefd64f143e30}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.963: INFO: Pod "nginx-deployment-7b8c6f4498-9xltq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9xltq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-9xltq,UID:db8b46f7-6315-4d50-9af0-c39548bda7d7,ResourceVersion:19116411,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026ffc57 0xc0026ffc58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ffcd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ffcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.963: INFO: Pod "nginx-deployment-7b8c6f4498-ccvpc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ccvpc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-ccvpc,UID:ec360204-9797-40c5-989d-28dfb105ebab,ResourceVersion:19116373,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026ffdb7 0xc0026ffdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ffe30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ffe50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.963: INFO: Pod "nginx-deployment-7b8c6f4498-g8gx8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g8gx8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-g8gx8,UID:bce0888f-5b9c-4a1f-8778-8a705bffaf3c,ResourceVersion:19116355,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0026fff17 0xc0026fff18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026fff90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026fffb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-h54tm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h54tm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-h54tm,UID:39e970d3-8313-4ef8-ab09-ba7614bdf848,ResourceVersion:19116184,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8077 0xc0030e8078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e80f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.87,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2635f5b493b4280461298fa41869a6c9c23bfe62e3500881f994ef11e493b78f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-m4kvv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m4kvv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-m4kvv,UID:b905f97b-681d-4d16-a6bf-e33db1eb2d8a,ResourceVersion:19116219,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e81e7 0xc0030e81e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.90,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:02:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d743d6cb9e1bf20f173dcba266f053f6a46142550a54ea66b2085c1c71bf3d28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-nmwvg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nmwvg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-nmwvg,UID:27e23d27-2274-4169-8b4d-ddecf777b649,ResourceVersion:19116332,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8357 0xc0030e8358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e83d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e83f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-rqctj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rqctj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-rqctj,UID:9cc12aa0-e2a3-463c-9935-89420ebdaa79,ResourceVersion:19116196,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e84b7 0xc0030e84b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.88,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:01:59 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e518216045cfa88703201a51f53588193f71d3357eea19e6a928fcc1c3645959}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-rxfgd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rxfgd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-rxfgd,UID:a909c0b3-a995-4117-8795-5c0fe6786c02,ResourceVersion:19116376,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8627 0xc0030e8628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e86a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e86c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.964: INFO: Pod "nginx-deployment-7b8c6f4498-sw68w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sw68w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-sw68w,UID:be3d0bd9-1222-4302-b1c4-94a4df5de90c,ResourceVersion:19116408,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e87a7 0xc0030e87a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-wmsvx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wmsvx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-wmsvx,UID:0b541f4d-6f43-4b82-8a66-92d1a3ec01bd,ResourceVersion:19116417,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8907 0xc0030e8908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e89a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-wvjrg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wvjrg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-wvjrg,UID:881d14ba-4684-4606-a606-09bdb63baf5f,ResourceVersion:19116382,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8a67 0xc0030e8a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-xbkgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xbkgs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-xbkgs,UID:09edaf27-b610-4f21-b37c-3b724f103b80,ResourceVersion:19116211,Generation:0,CreationTimestamp:2020-06-29 14:01:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8bc7 0xc0030e8bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.32,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:02:00 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://691734a16f970b5e6c15083c785d485300b1dbb433be732ff43ce828bb4ea78e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-xjbm6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xjbm6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-xjbm6,UID:e78f797e-da2f-40f5-bcb4-c26f86a2fc87,ResourceVersion:19116392,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8d37 0xc0030e8d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-zbbw6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zbbw6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-zbbw6,UID:9353f206-c5bb-4be0-ad91-36253c991a6b,ResourceVersion:19116362,Generation:0,CreationTimestamp:2020-06-29 14:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8e97 0xc0030e8e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-29 14:02:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 29 14:02:08.965: INFO: Pod "nginx-deployment-7b8c6f4498-zwzwk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwzwk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-408,SelfLink:/api/v1/namespaces/deployment-408/pods/nginx-deployment-7b8c6f4498-zwzwk,UID:7b75f182-13b2-4936-9398-5f81d396d854,ResourceVersion:19116225,Generation:0,CreationTimestamp:2020-06-29 14:01:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9f012ec6-36c6-4894-9040-f2e98bcbd2c1 0xc0030e8ff7 0xc0030e8ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mv9rg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mv9rg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mv9rg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e9070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e9090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:02:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:01:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.89,StartTime:2020-06-29 14:01:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-29 14:02:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bcc6501565d8703a9463ab5c0f29eb08bd799b87a467ae1b4a6156660efd596f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:02:08.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-408" for this suite. 
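Aside: the "is available" / "is not available" verdict attached to each Pod dump in the proportional-scaling check above comes down to the pod's Ready condition. Below is a minimal client-go sketch of the same readiness report; the kubeconfig path, namespace, and name=nginx label are taken from this run, everything else (and the program itself) is hypothetical, not the framework's code.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the e2e run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same label the test's Deployment stamps on its pods.
	pods, err := cs.CoreV1().Pods("deployment-408").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=nginx"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}

Note that a Deployment additionally requires minReadySeconds to have elapsed before counting a pod as available, so a freshly Ready pod can still be reported as unavailable for a moment.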
Jun 29 14:02:28.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:02:28.094: INFO: namespace deployment-408 deletion completed in 18.56575946s • [SLOW TEST:36.280 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:02:28.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 29 14:02:28.271: INFO: Waiting up to 5m0s for pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4" in namespace "downward-api-5914" to be "success or failure" Jun 29 14:02:28.300: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.290224ms Jun 29 14:02:30.328: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057559215s Jun 29 14:02:32.332: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061177282s Jun 29 14:02:34.357: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Running", Reason="", readiness=true. Elapsed: 6.086354539s Jun 29 14:02:36.360: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Running", Reason="", readiness=true. Elapsed: 8.089074396s Jun 29 14:02:38.365: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094344757s STEP: Saw pod success Jun 29 14:02:38.365: INFO: Pod "downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4" satisfied condition "success or failure" Jun 29 14:02:38.370: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4 container dapi-container: STEP: delete the pod Jun 29 14:02:38.389: INFO: Waiting for pod downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4 to disappear Jun 29 14:02:38.394: INFO: Pod downward-api-b365a33e-bc38-499a-9b88-b5da545b86c4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:02:38.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5914" for this suite. 
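Aside: the dapi-container in the Downward API spec above receives its own UID through a fieldRef environment variable. A minimal client-go sketch of that wiring follows; the pod name, container image, and namespace are hypothetical, but metadata.uid is the field the test exercises.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// Downward API: resolve the pod's own UID at container start.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}

The kubelet resolves metadata.uid when the container starts, so it arrives as an ordinary environment variable, which is what the test verifies from the container's logs.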
Jun 29 14:02:44.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:02:44.489: INFO: namespace downward-api-5914 deletion completed in 6.092449688s • [SLOW TEST:16.395 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:02:44.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 29 14:02:44.568: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116757,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 29 14:02:44.568: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116757,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 29 14:02:54.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116777,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 29 14:02:54.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116777,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 29 14:03:04.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116798,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 29 14:03:04.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116798,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 29 14:03:14.593: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116820,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 29 14:03:14.593: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-a,UID:5c60d8ab-8c7a-4b5c-898d-d2ba5e0fc7cd,ResourceVersion:19116820,Generation:0,CreationTimestamp:2020-06-29 14:02:44 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 29 14:03:24.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-b,UID:ebc0db3e-57c3-4cad-8bd1-56e2027401a9,ResourceVersion:19116840,Generation:0,CreationTimestamp:2020-06-29 14:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 29 14:03:24.602: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-b,UID:ebc0db3e-57c3-4cad-8bd1-56e2027401a9,ResourceVersion:19116840,Generation:0,CreationTimestamp:2020-06-29 14:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 29 14:03:34.608: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-b,UID:ebc0db3e-57c3-4cad-8bd1-56e2027401a9,ResourceVersion:19116860,Generation:0,CreationTimestamp:2020-06-29 14:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 29 14:03:34.608: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7637,SelfLink:/api/v1/namespaces/watch-7637/configmaps/e2e-watch-test-configmap-b,UID:ebc0db3e-57c3-4cad-8bd1-56e2027401a9,ResourceVersion:19116860,Generation:0,CreationTimestamp:2020-06-29 14:03:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:03:44.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7637" for this suite. 
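Aside: each "Got : ADDED/MODIFIED/DELETED" line in the Watchers spec above is one event delivered to a label-filtered watch. A minimal client-go sketch of such a watcher follows, reusing the watch-this-configmap label from the test; the namespace and error handling are illustrative.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Watch only configmaps carrying the label the test uses for watcher A.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		// ev.Type is ADDED, MODIFIED, or DELETED, matching the log lines above.
		fmt.Printf("Got : %s %s rv=%s data=%v\n", ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
	}
}

Two watchers whose selectors both match a configmap (label A, and "label A or B" in the test) each receive their own copy of every event, which is why every notification appears twice in the log.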
Jun 29 14:03:50.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:03:50.716: INFO: namespace watch-7637 deletion completed in 6.101792518s • [SLOW TEST:66.227 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:03:50.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-jf7f STEP: Creating a pod to test atomic-volume-subpath Jun 29 14:03:50.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jf7f" in namespace "subpath-1706" to be "success or failure" Jun 29 14:03:50.862: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231872ms Jun 29 14:03:52.866: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008231326s Jun 29 14:03:54.870: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.012461921s Jun 29 14:03:56.875: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 6.017101308s Jun 29 14:03:58.879: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 8.0210376s Jun 29 14:04:00.883: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 10.02491971s Jun 29 14:04:02.887: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 12.029509052s Jun 29 14:04:04.891: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 14.032783828s Jun 29 14:04:06.895: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 16.037460411s Jun 29 14:04:08.900: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 18.042156607s Jun 29 14:04:10.904: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 20.046218654s Jun 29 14:04:12.909: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Running", Reason="", readiness=true. Elapsed: 22.051363407s Jun 29 14:04:14.914: INFO: Pod "pod-subpath-test-downwardapi-jf7f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056029779s STEP: Saw pod success Jun 29 14:04:14.914: INFO: Pod "pod-subpath-test-downwardapi-jf7f" satisfied condition "success or failure" Jun 29 14:04:14.918: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-jf7f container test-container-subpath-downwardapi-jf7f: STEP: delete the pod Jun 29 14:04:14.955: INFO: Waiting for pod pod-subpath-test-downwardapi-jf7f to disappear Jun 29 14:04:14.961: INFO: Pod pod-subpath-test-downwardapi-jf7f no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jf7f Jun 29 14:04:14.961: INFO: Deleting pod "pod-subpath-test-downwardapi-jf7f" in namespace "subpath-1706" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:04:14.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1706" for this suite. Jun 29 14:04:20.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:04:21.061: INFO: namespace subpath-1706 deletion completed in 6.094489399s • [SLOW TEST:30.345 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:04:21.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:04:21.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20" in namespace "downward-api-1114" to be "success or failure" Jun 29 14:04:21.128: INFO: Pod "downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20": Phase="Pending", Reason="", readiness=false. Elapsed: 3.050305ms Jun 29 14:04:23.133: INFO: Pod "downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007405247s Jun 29 14:04:25.136: INFO: Pod "downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011055972s STEP: Saw pod success Jun 29 14:04:25.136: INFO: Pod "downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20" satisfied condition "success or failure" Jun 29 14:04:25.139: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20 container client-container: STEP: delete the pod Jun 29 14:04:25.251: INFO: Waiting for pod downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20 to disappear Jun 29 14:04:25.296: INFO: Pod downwardapi-volume-d0201e78-9349-4707-ada2-133a1e5d0a20 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:04:25.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1114" for this suite. Jun 29 14:04:31.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:04:31.417: INFO: namespace downward-api-1114 deletion completed in 6.117556347s • [SLOW TEST:10.356 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:04:31.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:04:31.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7857" for this suite. 
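Aside: the Services spec above emits no STEP lines of its own because the whole check runs in-process; roughly, it asserts that the cluster's built-in kubernetes service in the default namespace exists and exposes the API server securely on port 443. A hedged client-go sketch of inspecting that service:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The "master service" is the kubernetes Service in the default namespace.
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %s: %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}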
Jun 29 14:04:37.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:04:37.588: INFO: namespace services-7857 deletion completed in 6.102374399s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.170 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:04:37.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:04:37.673: INFO: Create a RollingUpdate DaemonSet Jun 29 14:04:37.676: INFO: Check that daemon pods launch on every node of the cluster Jun 29 14:04:37.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:37.707: INFO: Number of nodes with available pods: 0 Jun 29 14:04:37.707: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:04:38.713: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:38.717: INFO: Number of nodes with available pods: 0 Jun 29 14:04:38.717: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:04:39.713: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:39.716: INFO: Number of nodes with available pods: 0 Jun 29 14:04:39.716: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:04:40.713: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:40.716: INFO: Number of nodes with available pods: 0 Jun 29 14:04:40.716: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:04:41.712: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:41.716: INFO: Number of nodes with available pods: 0 Jun 29 14:04:41.716: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:04:42.713: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:42.717: INFO: Number of nodes with available pods: 2 Jun 29 14:04:42.717: INFO: Number of running nodes: 2, number of available pods: 2 Jun 29 14:04:42.717: INFO: Update the DaemonSet to trigger a rollout Jun 29 14:04:42.723: INFO: Updating DaemonSet daemon-set Jun 29 14:04:52.744: INFO: Roll back the DaemonSet before rollout is complete Jun 29 14:04:52.751: INFO: Updating DaemonSet daemon-set Jun 29 14:04:52.751: INFO: Make sure DaemonSet rollback is complete Jun 29 14:04:52.771: INFO: Wrong image for pod: daemon-set-72lr9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 29 14:04:52.772: INFO: Pod daemon-set-72lr9 is not available Jun 29 14:04:52.775: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:53.780: INFO: Wrong image for pod: daemon-set-72lr9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 29 14:04:53.780: INFO: Pod daemon-set-72lr9 is not available Jun 29 14:04:53.784: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:54.779: INFO: Wrong image for pod: daemon-set-72lr9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 29 14:04:54.779: INFO: Pod daemon-set-72lr9 is not available Jun 29 14:04:54.783: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 29 14:04:55.780: INFO: Pod daemon-set-npszv is not available Jun 29 14:04:55.784: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8308, will wait for the garbage collector to delete the pods Jun 29 14:04:55.851: INFO: Deleting DaemonSet.extensions daemon-set took: 7.353293ms Jun 29 14:04:56.152: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267059ms Jun 29 14:04:59.055: INFO: Number of nodes with available pods: 0 Jun 29 14:04:59.055: INFO: Number of running nodes: 0, number of available pods: 0 Jun 29 14:04:59.057: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8308/daemonsets","resourceVersion":"19117161"},"items":null} Jun 29 14:04:59.059: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8308/pods","resourceVersion":"19117161"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:04:59.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8308" for this suite. 
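Aside: the rollback in the DaemonSet spec above is driven entirely through the API: the test swaps the DaemonSet's image for the unresolvable foo:non-existent, then restores docker.io/library/nginx:1.14-alpine before the broken rollout can finish. A minimal sketch of that sequence; the namespace is hypothetical, and a real client should wrap each read-modify-write in a conflict retry.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	dsClient := cs.AppsV1().DaemonSets("default")

	// Fetch the DaemonSet, rewrite the pod template's image, and push the update.
	setImage := func(image string) {
		ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}

	setImage("foo:non-existent")                    // trigger a rollout that can never finish
	setImage("docker.io/library/nginx:1.14-alpine") // roll back before it completes
}

Pods that never ran the bad image are left alone on rollback, which is the "without unnecessary restarts" part of the spec name: in the run above only daemon-set-72lr9 is replaced.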
Jun 29 14:05:05.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:05:05.192: INFO: namespace daemonsets-8308 deletion completed in 6.121087449s • [SLOW TEST:27.603 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:05:05.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 29 14:05:09.340: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:05:09.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2207" for this suite. 
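Aside: the "Expected: &{DONE}" line in the Container Runtime spec above is the kubelet surfacing the container's termination message from a custom path. A minimal sketch of a pod that produces such a message; the image, names, and UID are hypothetical, while the two knobs the test exercises are a non-default terminationMessagePath and a non-root runAsUser.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	uid := int64(1000) // non-root user, as the spec title requires
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Write the message to a non-default, user-writable path.
				Command:                []string{"sh", "-c", "echo -n DONE > /tmp/termination-log"},
				TerminationMessagePath: "/tmp/termination-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	// After the container exits, the kubelet copies the file's contents into
	// status.containerStatuses[0].state.terminated.message ("DONE" here).
}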
Jun 29 14:05:15.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:05:15.668: INFO: namespace container-runtime-2207 deletion completed in 6.094406923s • [SLOW TEST:10.476 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:05:15.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8846 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8846 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8846 Jun 29 14:05:15.821: INFO: Found 0 stateful pods, waiting for 1 Jun 29 14:05:25.826: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 29 14:05:25.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:05:28.812: INFO: stderr: "I0629 14:05:28.652094 1962 log.go:172] (0xc000b52370) (0xc000638a00) Create stream\nI0629 14:05:28.652134 1962 log.go:172] (0xc000b52370) (0xc000638a00) Stream added, broadcasting: 1\nI0629 14:05:28.654445 1962 log.go:172] (0xc000b52370) Reply frame received for 1\nI0629 14:05:28.654487 1962 log.go:172] (0xc000b52370) (0xc0002fa000) Create stream\nI0629 14:05:28.654499 1962 log.go:172] (0xc000b52370) (0xc0002fa000) Stream added, broadcasting: 3\nI0629 14:05:28.655492 1962 log.go:172] (0xc000b52370) Reply frame received for 3\nI0629 14:05:28.655561 1962 log.go:172] (0xc000b52370) (0xc000386000) Create stream\nI0629 14:05:28.655590 1962 log.go:172] (0xc000b52370) (0xc000386000) Stream added, broadcasting: 5\nI0629 14:05:28.656502 1962 log.go:172] (0xc000b52370) Reply frame received for 
5\nI0629 14:05:28.764045 1962 log.go:172] (0xc000b52370) Data frame received for 5\nI0629 14:05:28.764066 1962 log.go:172] (0xc000386000) (5) Data frame handling\nI0629 14:05:28.764076 1962 log.go:172] (0xc000386000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:05:28.800831 1962 log.go:172] (0xc000b52370) Data frame received for 3\nI0629 14:05:28.800877 1962 log.go:172] (0xc0002fa000) (3) Data frame handling\nI0629 14:05:28.800935 1962 log.go:172] (0xc0002fa000) (3) Data frame sent\nI0629 14:05:28.800973 1962 log.go:172] (0xc000b52370) Data frame received for 5\nI0629 14:05:28.800985 1962 log.go:172] (0xc000386000) (5) Data frame handling\nI0629 14:05:28.801611 1962 log.go:172] (0xc000b52370) Data frame received for 3\nI0629 14:05:28.801640 1962 log.go:172] (0xc0002fa000) (3) Data frame handling\nI0629 14:05:28.803403 1962 log.go:172] (0xc000b52370) Data frame received for 1\nI0629 14:05:28.803426 1962 log.go:172] (0xc000638a00) (1) Data frame handling\nI0629 14:05:28.803437 1962 log.go:172] (0xc000638a00) (1) Data frame sent\nI0629 14:05:28.803449 1962 log.go:172] (0xc000b52370) (0xc000638a00) Stream removed, broadcasting: 1\nI0629 14:05:28.803574 1962 log.go:172] (0xc000b52370) Go away received\nI0629 14:05:28.803761 1962 log.go:172] (0xc000b52370) (0xc000638a00) Stream removed, broadcasting: 1\nI0629 14:05:28.803779 1962 log.go:172] (0xc000b52370) (0xc0002fa000) Stream removed, broadcasting: 3\nI0629 14:05:28.803786 1962 log.go:172] (0xc000b52370) (0xc000386000) Stream removed, broadcasting: 5\n" Jun 29 14:05:28.813: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:05:28.813: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:05:28.817: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 29 14:05:38.822: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:05:38.822: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:05:38.921: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:05:38.921: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:05:38.921: INFO: Jun 29 14:05:38.921: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 29 14:05:39.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989285354s Jun 29 14:05:40.988: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98437224s Jun 29 14:05:42.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.922766565s Jun 29 14:05:43.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.784926442s Jun 29 14:05:44.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.779239245s Jun 29 14:05:45.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.77301081s Jun 29 14:05:46.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.767673302s Jun 29 14:05:47.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.76227607s Jun 29 
14:05:48.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 757.234093ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8846 Jun 29 14:05:49.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:05:49.413: INFO: stderr: "I0629 14:05:49.328644 1993 log.go:172] (0xc00012adc0) (0xc00058a6e0) Create stream\nI0629 14:05:49.328696 1993 log.go:172] (0xc00012adc0) (0xc00058a6e0) Stream added, broadcasting: 1\nI0629 14:05:49.331879 1993 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0629 14:05:49.332089 1993 log.go:172] (0xc00012adc0) (0xc00058a000) Create stream\nI0629 14:05:49.332103 1993 log.go:172] (0xc00012adc0) (0xc00058a000) Stream added, broadcasting: 3\nI0629 14:05:49.332943 1993 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0629 14:05:49.332983 1993 log.go:172] (0xc00012adc0) (0xc00058a0a0) Create stream\nI0629 14:05:49.332995 1993 log.go:172] (0xc00012adc0) (0xc00058a0a0) Stream added, broadcasting: 5\nI0629 14:05:49.333915 1993 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0629 14:05:49.406823 1993 log.go:172] (0xc00012adc0) Data frame received for 5\nI0629 14:05:49.406857 1993 log.go:172] (0xc00058a0a0) (5) Data frame handling\nI0629 14:05:49.406868 1993 log.go:172] (0xc00058a0a0) (5) Data frame sent\nI0629 14:05:49.406877 1993 log.go:172] (0xc00012adc0) Data frame received for 5\nI0629 14:05:49.406886 1993 log.go:172] (0xc00058a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:05:49.406910 1993 log.go:172] (0xc00012adc0) Data frame received for 3\nI0629 14:05:49.406920 1993 log.go:172] (0xc00058a000) (3) Data frame handling\nI0629 14:05:49.406929 1993 log.go:172] (0xc00058a000) (3) Data frame sent\nI0629 14:05:49.406939 1993 log.go:172] (0xc00012adc0) Data frame received for 3\nI0629 14:05:49.406946 1993 log.go:172] (0xc00058a000) (3) Data frame handling\nI0629 14:05:49.408824 1993 log.go:172] (0xc00012adc0) Data frame received for 1\nI0629 14:05:49.408839 1993 log.go:172] (0xc00058a6e0) (1) Data frame handling\nI0629 14:05:49.408846 1993 log.go:172] (0xc00058a6e0) (1) Data frame sent\nI0629 14:05:49.408859 1993 log.go:172] (0xc00012adc0) (0xc00058a6e0) Stream removed, broadcasting: 1\nI0629 14:05:49.408879 1993 log.go:172] (0xc00012adc0) Go away received\nI0629 14:05:49.409425 1993 log.go:172] (0xc00012adc0) (0xc00058a6e0) Stream removed, broadcasting: 1\nI0629 14:05:49.409440 1993 log.go:172] (0xc00012adc0) (0xc00058a000) Stream removed, broadcasting: 3\nI0629 14:05:49.409445 1993 log.go:172] (0xc00012adc0) (0xc00058a0a0) Stream removed, broadcasting: 5\n" Jun 29 14:05:49.413: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:05:49.413: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:05:49.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:05:49.641: INFO: stderr: "I0629 14:05:49.544328 2013 log.go:172] (0xc00012afd0) (0xc000674be0) Create stream\nI0629 14:05:49.544377 2013 log.go:172] (0xc00012afd0) (0xc000674be0) Stream added, broadcasting: 1\nI0629 14:05:49.546914 2013 log.go:172] (0xc00012afd0) Reply frame received for 
1\nI0629 14:05:49.546973 2013 log.go:172] (0xc00012afd0) (0xc0007e0000) Create stream\nI0629 14:05:49.546995 2013 log.go:172] (0xc00012afd0) (0xc0007e0000) Stream added, broadcasting: 3\nI0629 14:05:49.547828 2013 log.go:172] (0xc00012afd0) Reply frame received for 3\nI0629 14:05:49.547860 2013 log.go:172] (0xc00012afd0) (0xc0008b4000) Create stream\nI0629 14:05:49.547872 2013 log.go:172] (0xc00012afd0) (0xc0008b4000) Stream added, broadcasting: 5\nI0629 14:05:49.548527 2013 log.go:172] (0xc00012afd0) Reply frame received for 5\nI0629 14:05:49.630334 2013 log.go:172] (0xc00012afd0) Data frame received for 5\nI0629 14:05:49.630363 2013 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0629 14:05:49.630383 2013 log.go:172] (0xc0008b4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:05:49.631519 2013 log.go:172] (0xc00012afd0) Data frame received for 3\nI0629 14:05:49.631539 2013 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0629 14:05:49.631549 2013 log.go:172] (0xc0007e0000) (3) Data frame sent\nI0629 14:05:49.631572 2013 log.go:172] (0xc00012afd0) Data frame received for 5\nI0629 14:05:49.631602 2013 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0629 14:05:49.631730 2013 log.go:172] (0xc0008b4000) (5) Data frame sent\nI0629 14:05:49.631764 2013 log.go:172] (0xc00012afd0) Data frame received for 5\nI0629 14:05:49.631782 2013 log.go:172] (0xc0008b4000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0629 14:05:49.631812 2013 log.go:172] (0xc0008b4000) (5) Data frame sent\nI0629 14:05:49.632218 2013 log.go:172] (0xc00012afd0) Data frame received for 5\nI0629 14:05:49.632264 2013 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0629 14:05:49.632302 2013 log.go:172] (0xc00012afd0) Data frame received for 3\nI0629 14:05:49.632339 2013 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0629 14:05:49.634691 2013 log.go:172] (0xc00012afd0) Data frame received for 1\nI0629 14:05:49.634725 2013 log.go:172] (0xc000674be0) (1) Data frame handling\nI0629 14:05:49.634744 2013 log.go:172] (0xc000674be0) (1) Data frame sent\nI0629 14:05:49.634776 2013 log.go:172] (0xc00012afd0) (0xc000674be0) Stream removed, broadcasting: 1\nI0629 14:05:49.634838 2013 log.go:172] (0xc00012afd0) Go away received\nI0629 14:05:49.635304 2013 log.go:172] (0xc00012afd0) (0xc000674be0) Stream removed, broadcasting: 1\nI0629 14:05:49.635329 2013 log.go:172] (0xc00012afd0) (0xc0007e0000) Stream removed, broadcasting: 3\nI0629 14:05:49.635340 2013 log.go:172] (0xc00012afd0) (0xc0008b4000) Stream removed, broadcasting: 5\n" Jun 29 14:05:49.641: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:05:49.641: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:05:49.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:05:49.859: INFO: stderr: "I0629 14:05:49.766482 2034 log.go:172] (0xc0009022c0) (0xc0008326e0) Create stream\nI0629 14:05:49.766540 2034 log.go:172] (0xc0009022c0) (0xc0008326e0) Stream added, broadcasting: 1\nI0629 14:05:49.775190 2034 log.go:172] (0xc0009022c0) Reply frame received for 1\nI0629 14:05:49.775245 2034 log.go:172] (0xc0009022c0) (0xc0004fa280) Create stream\nI0629 14:05:49.775259 2034 log.go:172] (0xc0009022c0) (0xc0004fa280) Stream added, 
broadcasting: 3\nI0629 14:05:49.776596 2034 log.go:172] (0xc0009022c0) Reply frame received for 3\nI0629 14:05:49.776626 2034 log.go:172] (0xc0009022c0) (0xc0004fa320) Create stream\nI0629 14:05:49.776635 2034 log.go:172] (0xc0009022c0) (0xc0004fa320) Stream added, broadcasting: 5\nI0629 14:05:49.779389 2034 log.go:172] (0xc0009022c0) Reply frame received for 5\nI0629 14:05:49.850519 2034 log.go:172] (0xc0009022c0) Data frame received for 3\nI0629 14:05:49.850610 2034 log.go:172] (0xc0004fa280) (3) Data frame handling\nI0629 14:05:49.850624 2034 log.go:172] (0xc0004fa280) (3) Data frame sent\nI0629 14:05:49.850632 2034 log.go:172] (0xc0009022c0) Data frame received for 3\nI0629 14:05:49.850637 2034 log.go:172] (0xc0004fa280) (3) Data frame handling\nI0629 14:05:49.850663 2034 log.go:172] (0xc0009022c0) Data frame received for 5\nI0629 14:05:49.850671 2034 log.go:172] (0xc0004fa320) (5) Data frame handling\nI0629 14:05:49.850677 2034 log.go:172] (0xc0004fa320) (5) Data frame sent\nI0629 14:05:49.850684 2034 log.go:172] (0xc0009022c0) Data frame received for 5\nI0629 14:05:49.850689 2034 log.go:172] (0xc0004fa320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0629 14:05:49.852883 2034 log.go:172] (0xc0009022c0) Data frame received for 1\nI0629 14:05:49.852938 2034 log.go:172] (0xc0008326e0) (1) Data frame handling\nI0629 14:05:49.852979 2034 log.go:172] (0xc0008326e0) (1) Data frame sent\nI0629 14:05:49.853029 2034 log.go:172] (0xc0009022c0) (0xc0008326e0) Stream removed, broadcasting: 1\nI0629 14:05:49.853328 2034 log.go:172] (0xc0009022c0) Go away received\nI0629 14:05:49.853465 2034 log.go:172] (0xc0009022c0) (0xc0008326e0) Stream removed, broadcasting: 1\nI0629 14:05:49.853484 2034 log.go:172] (0xc0009022c0) (0xc0004fa280) Stream removed, broadcasting: 3\nI0629 14:05:49.853492 2034 log.go:172] (0xc0009022c0) (0xc0004fa320) Stream removed, broadcasting: 5\n" Jun 29 14:05:49.859: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:05:49.859: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:05:49.865: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 29 14:05:59.923: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 29 14:05:59.923: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 29 14:05:59.923: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 29 14:05:59.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:06:00.167: INFO: stderr: "I0629 14:06:00.065395 2054 log.go:172] (0xc000ac2630) (0xc0005d4960) Create stream\nI0629 14:06:00.065450 2054 log.go:172] (0xc000ac2630) (0xc0005d4960) Stream added, broadcasting: 1\nI0629 14:06:00.068223 2054 log.go:172] (0xc000ac2630) Reply frame received for 1\nI0629 14:06:00.068306 2054 log.go:172] (0xc000ac2630) (0xc000918000) Create stream\nI0629 14:06:00.068377 2054 log.go:172] (0xc000ac2630) (0xc000918000) Stream added, broadcasting: 3\nI0629 14:06:00.070074 2054 log.go:172] (0xc000ac2630) Reply frame received for 3\nI0629 14:06:00.070337 2054 log.go:172] 
(0xc000ac2630) (0xc0005d40a0) Create stream\nI0629 14:06:00.070368 2054 log.go:172] (0xc000ac2630) (0xc0005d40a0) Stream added, broadcasting: 5\nI0629 14:06:00.071473 2054 log.go:172] (0xc000ac2630) Reply frame received for 5\nI0629 14:06:00.159142 2054 log.go:172] (0xc000ac2630) Data frame received for 5\nI0629 14:06:00.159160 2054 log.go:172] (0xc0005d40a0) (5) Data frame handling\nI0629 14:06:00.159170 2054 log.go:172] (0xc0005d40a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:06:00.159645 2054 log.go:172] (0xc000ac2630) Data frame received for 3\nI0629 14:06:00.159662 2054 log.go:172] (0xc000918000) (3) Data frame handling\nI0629 14:06:00.159678 2054 log.go:172] (0xc000918000) (3) Data frame sent\nI0629 14:06:00.159732 2054 log.go:172] (0xc000ac2630) Data frame received for 3\nI0629 14:06:00.159752 2054 log.go:172] (0xc000918000) (3) Data frame handling\nI0629 14:06:00.159829 2054 log.go:172] (0xc000ac2630) Data frame received for 5\nI0629 14:06:00.159846 2054 log.go:172] (0xc0005d40a0) (5) Data frame handling\nI0629 14:06:00.161325 2054 log.go:172] (0xc000ac2630) Data frame received for 1\nI0629 14:06:00.161342 2054 log.go:172] (0xc0005d4960) (1) Data frame handling\nI0629 14:06:00.161350 2054 log.go:172] (0xc0005d4960) (1) Data frame sent\nI0629 14:06:00.161363 2054 log.go:172] (0xc000ac2630) (0xc0005d4960) Stream removed, broadcasting: 1\nI0629 14:06:00.161377 2054 log.go:172] (0xc000ac2630) Go away received\nI0629 14:06:00.161661 2054 log.go:172] (0xc000ac2630) (0xc0005d4960) Stream removed, broadcasting: 1\nI0629 14:06:00.161673 2054 log.go:172] (0xc000ac2630) (0xc000918000) Stream removed, broadcasting: 3\nI0629 14:06:00.161678 2054 log.go:172] (0xc000ac2630) (0xc0005d40a0) Stream removed, broadcasting: 5\n" Jun 29 14:06:00.167: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:06:00.167: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:06:00.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:06:00.429: INFO: stderr: "I0629 14:06:00.283696 2076 log.go:172] (0xc000620420) (0xc000506820) Create stream\nI0629 14:06:00.283750 2076 log.go:172] (0xc000620420) (0xc000506820) Stream added, broadcasting: 1\nI0629 14:06:00.287564 2076 log.go:172] (0xc000620420) Reply frame received for 1\nI0629 14:06:00.287608 2076 log.go:172] (0xc000620420) (0xc000506000) Create stream\nI0629 14:06:00.287617 2076 log.go:172] (0xc000620420) (0xc000506000) Stream added, broadcasting: 3\nI0629 14:06:00.288466 2076 log.go:172] (0xc000620420) Reply frame received for 3\nI0629 14:06:00.288515 2076 log.go:172] (0xc000620420) (0xc00033a280) Create stream\nI0629 14:06:00.288538 2076 log.go:172] (0xc000620420) (0xc00033a280) Stream added, broadcasting: 5\nI0629 14:06:00.290376 2076 log.go:172] (0xc000620420) Reply frame received for 5\nI0629 14:06:00.361884 2076 log.go:172] (0xc000620420) Data frame received for 5\nI0629 14:06:00.361903 2076 log.go:172] (0xc00033a280) (5) Data frame handling\nI0629 14:06:00.361914 2076 log.go:172] (0xc00033a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:06:00.420317 2076 log.go:172] (0xc000620420) Data frame received for 3\nI0629 14:06:00.420358 2076 log.go:172] (0xc000506000) (3) Data frame handling\nI0629 14:06:00.420392 2076 log.go:172] 
(0xc000506000) (3) Data frame sent\nI0629 14:06:00.420642 2076 log.go:172] (0xc000620420) Data frame received for 5\nI0629 14:06:00.420690 2076 log.go:172] (0xc00033a280) (5) Data frame handling\nI0629 14:06:00.420713 2076 log.go:172] (0xc000620420) Data frame received for 3\nI0629 14:06:00.420730 2076 log.go:172] (0xc000506000) (3) Data frame handling\nI0629 14:06:00.422783 2076 log.go:172] (0xc000620420) Data frame received for 1\nI0629 14:06:00.422805 2076 log.go:172] (0xc000506820) (1) Data frame handling\nI0629 14:06:00.422830 2076 log.go:172] (0xc000506820) (1) Data frame sent\nI0629 14:06:00.422924 2076 log.go:172] (0xc000620420) (0xc000506820) Stream removed, broadcasting: 1\nI0629 14:06:00.422943 2076 log.go:172] (0xc000620420) Go away received\nI0629 14:06:00.423312 2076 log.go:172] (0xc000620420) (0xc000506820) Stream removed, broadcasting: 1\nI0629 14:06:00.423336 2076 log.go:172] (0xc000620420) (0xc000506000) Stream removed, broadcasting: 3\nI0629 14:06:00.423349 2076 log.go:172] (0xc000620420) (0xc00033a280) Stream removed, broadcasting: 5\n" Jun 29 14:06:00.429: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:06:00.429: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:06:00.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:06:00.685: INFO: stderr: "I0629 14:06:00.581492 2095 log.go:172] (0xc00091c2c0) (0xc000308820) Create stream\nI0629 14:06:00.581549 2095 log.go:172] (0xc00091c2c0) (0xc000308820) Stream added, broadcasting: 1\nI0629 14:06:00.584042 2095 log.go:172] (0xc00091c2c0) Reply frame received for 1\nI0629 14:06:00.584078 2095 log.go:172] (0xc00091c2c0) (0xc000734000) Create stream\nI0629 14:06:00.584089 2095 log.go:172] (0xc00091c2c0) (0xc000734000) Stream added, broadcasting: 3\nI0629 14:06:00.584998 2095 log.go:172] (0xc00091c2c0) Reply frame received for 3\nI0629 14:06:00.585031 2095 log.go:172] (0xc00091c2c0) (0xc0003088c0) Create stream\nI0629 14:06:00.585045 2095 log.go:172] (0xc00091c2c0) (0xc0003088c0) Stream added, broadcasting: 5\nI0629 14:06:00.586032 2095 log.go:172] (0xc00091c2c0) Reply frame received for 5\nI0629 14:06:00.649838 2095 log.go:172] (0xc00091c2c0) Data frame received for 5\nI0629 14:06:00.649874 2095 log.go:172] (0xc0003088c0) (5) Data frame handling\nI0629 14:06:00.649909 2095 log.go:172] (0xc0003088c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:06:00.678348 2095 log.go:172] (0xc00091c2c0) Data frame received for 3\nI0629 14:06:00.678400 2095 log.go:172] (0xc000734000) (3) Data frame handling\nI0629 14:06:00.678422 2095 log.go:172] (0xc000734000) (3) Data frame sent\nI0629 14:06:00.678447 2095 log.go:172] (0xc00091c2c0) Data frame received for 3\nI0629 14:06:00.678474 2095 log.go:172] (0xc000734000) (3) Data frame handling\nI0629 14:06:00.678502 2095 log.go:172] (0xc00091c2c0) Data frame received for 5\nI0629 14:06:00.678522 2095 log.go:172] (0xc0003088c0) (5) Data frame handling\nI0629 14:06:00.680235 2095 log.go:172] (0xc00091c2c0) Data frame received for 1\nI0629 14:06:00.680248 2095 log.go:172] (0xc000308820) (1) Data frame handling\nI0629 14:06:00.680256 2095 log.go:172] (0xc000308820) (1) Data frame sent\nI0629 14:06:00.680556 2095 log.go:172] (0xc00091c2c0) (0xc000308820) Stream removed, broadcasting: 1\nI0629 14:06:00.680616 
2095 log.go:172] (0xc00091c2c0) Go away received\nI0629 14:06:00.681037 2095 log.go:172] (0xc00091c2c0) (0xc000308820) Stream removed, broadcasting: 1\nI0629 14:06:00.681063 2095 log.go:172] (0xc00091c2c0) (0xc000734000) Stream removed, broadcasting: 3\nI0629 14:06:00.681075 2095 log.go:172] (0xc00091c2c0) (0xc0003088c0) Stream removed, broadcasting: 5\n" Jun 29 14:06:00.685: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:06:00.685: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:06:00.685: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:06:00.706: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jun 29 14:06:10.713: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:06:10.714: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:06:10.714: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:06:10.723: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:10.723: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:10.724: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:10.724: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:10.724: INFO: Jun 29 14:06:10.724: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:11.728: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:11.728: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:11.728: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:11.728: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:11.728: INFO: Jun 29 14:06:11.728: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:12.732: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:12.733: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:12.733: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:12.733: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:12.733: INFO: Jun 29 14:06:12.733: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:13.738: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:13.738: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:13.738: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 
29 14:06:13.738: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:13.738: INFO: Jun 29 14:06:13.738: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:14.743: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:14.743: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:14.743: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:14.743: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:14.743: INFO: Jun 29 14:06:14.743: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:15.748: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:15.749: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:15.749: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:15.749: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:15.749: INFO: Jun 29 14:06:15.749: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:16.753: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:16.753: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:16.753: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:16.753: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:16.753: INFO: Jun 29 14:06:16.753: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:17.759: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:17.759: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:17.759: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:17.759: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:17.759: 
INFO: Jun 29 14:06:17.759: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:18.763: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:18.763: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:18.763: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:18.763: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:18.763: INFO: Jun 29 14:06:18.763: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 29 14:06:19.770: INFO: POD NODE PHASE GRACE CONDITIONS Jun 29 14:06:19.770: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:15 +0000 UTC }] Jun 29 14:06:19.770: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:19.770: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:06:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:05:38 +0000 UTC }] Jun 29 14:06:19.770: INFO: Jun 29 14:06:19.770: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8846
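The scale-down that follows proceeds even though all three replicas have just been made unready. That is the behaviour under test: with the default OrderedReady pod management policy the controller would wait for each pod to become Ready before acting on the next, so burst scaling implies parallel pod management. As a point of reference, a minimal sketch of a manifest with that policy; podManagementPolicy is a standard apps/v1 StatefulSet field, but the labels, image, and readiness probe below are assumptions inferred from the commands in this log, not copied from the test source:

    # Hypothetical reconstruction of the kind of StatefulSet this test creates.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss
    spec:
      serviceName: test               # matches the "Creating service test" step above
      replicas: 3
      podManagementPolicy: Parallel   # burst scaling: pods are created/deleted without ordering
      selector:
        matchLabels:
          app: ss                     # assumed label
      template:
        metadata:
          labels:
            app: ss
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine   # assumed image
            readinessProbe:
              httpGet:
                path: /index.html     # assumed probe path; moving index.html away fails it
                port: 80
    EOF

Moving /usr/share/nginx/html/index.html aside, as the exec commands in this test do, makes the probe fail and marks the pod unready; moving it back restores readiness without restarting the container.

Jun 29 14:06:20.774: INFO: Running '/usr/local/bin/kubectl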
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:06:20.902: INFO: rc: 1 Jun 29 14:06:20.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002ede510 exit status 1 true [0xc0003d6860 0xc0003d68d0 0xc0003d6938] [0xc0003d6860 0xc0003d68d0 0xc0003d6938] [0xc0003d68a8 0xc0003d6918] [0xba70e0 0xba70e0] 0xc001ac3ce0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 29 14:06:30.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:06:31.001: INFO: rc: 1 Jun 29 14:06:31.001: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ede5d0 exit status 1 true [0xc0003d6950 0xc0003d69b8 0xc0003d69e0] [0xc0003d6950 0xc0003d69b8 0xc0003d69e0] [0xc0003d6990 0xc0003d69d0] [0xba70e0 0xba70e0] 0xc001ada000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:06:41.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:06:41.110: INFO: rc: 1 Jun 29 14:06:41.110: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f16720 exit status 1 true [0xc0014105c8 0xc0014105e0 0xc0014105f8] [0xc0014105c8 0xc0014105e0 0xc0014105f8] [0xc0014105d8 0xc0014105f0] [0xba70e0 0xba70e0] 0xc002f5d380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:06:51.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:06:51.223: INFO: rc: 1 Jun 29 14:06:51.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0027ba210 exit status 1 true [0xc000187410 0xc000187428 0xc000187440] [0xc000187410 0xc000187428 0xc000187440] [0xc000187420 0xc000187438] [0xba70e0 0xba70e0] 0xc002abf320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:01.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:01.319: INFO: rc: 1 Jun 29 14:07:01.319: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0027ba300 exit status 1 true [0xc000187448 0xc000187460 0xc000187478] [0xc000187448 0xc000187460 0xc000187478] [0xc000187458 0xc000187470] [0xba70e0 0xba70e0] 0xc002abf680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:11.445: INFO: rc: 1 Jun 29 14:07:11.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f167e0 exit status 1 true [0xc001410600 0xc001410618 0xc001410630] [0xc001410600 0xc001410618 0xc001410630] [0xc001410610 0xc001410628] [0xba70e0 0xba70e0] 0xc002f5d680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:21.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:21.555: INFO: rc: 1 Jun 29 14:07:21.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eca090 exit status 1 true [0xc000010848 0xc0000113d0 0xc0000117f8] [0xc000010848 0xc0000113d0 0xc0000117f8] [0xc000010d18 0xc0000116e0] [0xba70e0 0xba70e0] 0xc001ac2000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:31.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:31.690: INFO: rc: 1 Jun 29 14:07:31.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2120 exit status 1 true [0xc000d6c030 0xc000d6c4a8 0xc000d6c898] [0xc000d6c030 0xc000d6c4a8 0xc000d6c898] [0xc000d6c288 0xc000d6c730] [0xba70e0 0xba70e0] 0xc00215bec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:41.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:41.789: INFO: rc: 1 Jun 29 14:07:41.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not 
found [] 0xc002858090 exit status 1 true [0xc000717db8 0xc0027e8000 0xc0027e8018] [0xc000717db8 0xc0027e8000 0xc0027e8018] [0xc000717fd8 0xc0027e8010] [0xba70e0 0xba70e0] 0xc0019be000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:07:51.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:07:51.889: INFO: rc: 1 Jun 29 14:07:51.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002858150 exit status 1 true [0xc0027e8020 0xc0027e8038 0xc0027e8050] [0xc0027e8020 0xc0027e8038 0xc0027e8050] [0xc0027e8030 0xc0027e8048] [0xba70e0 0xba70e0] 0xc0018eefc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:08:01.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:01.986: INFO: rc: 1 Jun 29 14:08:01.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002124720 exit status 1 true [0xc0026c2008 0xc0026c2048 0xc0026c2098] [0xc0026c2008 0xc0026c2048 0xc0026c2098] [0xc0026c2040 0xc0026c2080] [0xba70e0 0xba70e0] 0xc001afb320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:08:11.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:12.086: INFO: rc: 1 Jun 29 14:08:12.086: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2240 exit status 1 true [0xc000d6ca68 0xc000d6cbc0 0xc000d6d188] [0xc000d6ca68 0xc000d6cbc0 0xc000d6d188] [0xc000d6cb60 0xc000d6cec0] [0xba70e0 0xba70e0] 0xc0017fe480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:08:22.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:22.180: INFO: rc: 1 Jun 29 14:08:22.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2330 exit status 1 true [0xc000d6d2e0 0xc000d6d638 0xc000d6daf0] [0xc000d6d2e0 0xc000d6d638 0xc000d6daf0] [0xc000d6d4b8 0xc000d6d7d0] [0xba70e0 0xba70e0] 0xc0017fea20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Jun 29 14:08:32.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:32.274: INFO: rc: 1 Jun 29 14:08:32.274: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f23f0 exit status 1 true [0xc000d6dc20 0xc000d6dd58 0xc0003d6118] [0xc000d6dc20 0xc000d6dd58 0xc0003d6118] [0xc000d6dce0 0xc000d6df30] [0xba70e0 0xba70e0] 0xc0017ffec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:08:42.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:42.389: INFO: rc: 1 Jun 29 14:08:42.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f24b0 exit status 1 true [0xc0003d6160 0xc0003d6360 0xc0003d6428] [0xc0003d6160 0xc0003d6360 0xc0003d6428] [0xc0003d6300 0xc0003d6410] [0xba70e0 0xba70e0] 0xc001350060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:08:52.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:08:52.495: INFO: rc: 1 Jun 29 14:08:52.495: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f25a0 exit status 1 true [0xc0003d6438 0xc0003d64d8 0xc0003d6548] [0xc0003d6438 0xc0003d64d8 0xc0003d6548] [0xc0003d6488 0xc0003d6538] [0xba70e0 0xba70e0] 0xc0020f0ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:02.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:02.594: INFO: rc: 1 Jun 29 14:09:02.594: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002858270 exit status 1 true [0xc0027e8058 0xc0027e8078 0xc0027e80a8] [0xc0027e8058 0xc0027e8078 0xc0027e80a8] [0xc0027e8068 0xc0027e80a0] [0xba70e0 0xba70e0] 0xc0029f0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:12.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:12.689: INFO: rc: 
1 Jun 29 14:09:12.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002858390 exit status 1 true [0xc0027e80b8 0xc0027e80d0 0xc0027e80e8] [0xc0027e80b8 0xc0027e80d0 0xc0027e80e8] [0xc0027e80c8 0xc0027e80e0] [0xba70e0 0xba70e0] 0xc0029f0a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:22.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:22.788: INFO: rc: 1 Jun 29 14:09:22.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eca0f0 exit status 1 true [0xc000717db8 0xc000d6c030 0xc000d6c4a8] [0xc000717db8 0xc000d6c030 0xc000d6c4a8] [0xc000717fd8 0xc000d6c288] [0xba70e0 0xba70e0] 0xc0000b84e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:32.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:32.892: INFO: rc: 1 Jun 29 14:09:32.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2150 exit status 1 true [0xc0000105f8 0xc000010d18 0xc0000116e0] [0xc0000105f8 0xc000010d18 0xc0000116e0] [0xc000010b08 0xc000011658] [0xba70e0 0xba70e0] 0xc001e3a000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:42.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:43.015: INFO: rc: 1 Jun 29 14:09:43.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028580f0 exit status 1 true [0xc0003d6118 0xc0003d6300 0xc0003d6410] [0xc0003d6118 0xc0003d6300 0xc0003d6410] [0xc0003d62c8 0xc0003d63b8] [0xba70e0 0xba70e0] 0xc0019be000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:09:53.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:09:53.112: INFO: rc: 1 Jun 29 14:09:53.112: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] 
[] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2270 exit status 1 true [0xc0000117f8 0xc0000119c8 0xc000011a18] [0xc0000117f8 0xc0000119c8 0xc000011a18] [0xc000011980 0xc000011a00] [0xba70e0 0xba70e0] 0xc001a7d5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:03.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:03.212: INFO: rc: 1 Jun 29 14:10:03.212: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eca1e0 exit status 1 true [0xc000d6c588 0xc000d6ca68 0xc000d6cbc0] [0xc000d6c588 0xc000d6ca68 0xc000d6cbc0] [0xc000d6c898 0xc000d6cb60] [0xba70e0 0xba70e0] 0xc00215bec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:13.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:13.314: INFO: rc: 1 Jun 29 14:10:13.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002124780 exit status 1 true [0xc0027e8000 0xc0027e8018 0xc0027e8030] [0xc0027e8000 0xc0027e8018 0xc0027e8030] [0xc0027e8010 0xc0027e8028] [0xba70e0 0xba70e0] 0xc001ac22a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:23.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:23.409: INFO: rc: 1 Jun 29 14:10:23.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002124840 exit status 1 true [0xc0027e8038 0xc0027e8050 0xc0027e8068] [0xc0027e8038 0xc0027e8050 0xc0027e8068] [0xc0027e8048 0xc0027e8060] [0xba70e0 0xba70e0] 0xc001ac2600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:33.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:33.505: INFO: rc: 1 Jun 29 14:10:33.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026f2390 exit status 1 true [0xc000011a50 0xc000011ab8 0xc000011af8] [0xc000011a50 0xc000011ab8 0xc000011af8] [0xc000011a98 0xc000011ad8] [0xba70e0 0xba70e0] 0xc0017fe480 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:43.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:43.611: INFO: rc: 1 Jun 29 14:10:43.611: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002858240 exit status 1 true [0xc0003d6428 0xc0003d6488 0xc0003d6538] [0xc0003d6428 0xc0003d6488 0xc0003d6538] [0xc0003d6458 0xc0003d6528] [0xba70e0 0xba70e0] 0xc0020f0ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:10:53.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:10:53.708: INFO: rc: 1 Jun 29 14:10:53.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002124930 exit status 1 true [0xc0027e8078 0xc0027e80a8 0xc0027e8100] [0xc0027e8078 0xc0027e80a8 0xc0027e8100] [0xc0027e80a0 0xc0027e80f8] [0xba70e0 0xba70e0] 0xc001ac2960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:11:03.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:11:03.815: INFO: rc: 1 Jun 29 14:11:03.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021249f0 exit status 1 true [0xc0027e8108 0xc0027e8120 0xc0027e8138] [0xc0027e8108 0xc0027e8120 0xc0027e8138] [0xc0027e8118 0xc0027e8130] [0xba70e0 0xba70e0] 0xc001ac31a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:11:13.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:11:13.918: INFO: rc: 1 Jun 29 14:11:13.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002858450 exit status 1 true [0xc0003d6558 0xc0003d65f8 0xc0003d6658] [0xc0003d6558 0xc0003d65f8 0xc0003d6658] [0xc0003d65a8 0xc0003d6638] [0xba70e0 0xba70e0] 0xc0029f01e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 29 14:11:23.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jun 29 14:11:24.038: INFO: rc: 1 Jun 29 14:11:24.038: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jun 29 14:11:24.038: INFO: Scaling statefulset ss to 0 Jun 29 14:11:24.046: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 29 14:11:24.048: INFO: Deleting all statefulset in ns statefulset-8846 Jun 29 14:11:24.051: INFO: Scaling statefulset ss to 0 Jun 29 14:11:24.057: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:11:24.059: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:11:24.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8846" for this suite. Jun 29 14:11:30.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:11:30.201: INFO: namespace statefulset-8846 deletion completed in 6.123600293s • [SLOW TEST:374.532 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:11:30.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jun 29 14:11:30.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9505' Jun 29 14:11:30.587: INFO: stderr: "" Jun 29 14:11:30.587: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
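[Annotation] The steps below exercise kubectl's log-filtering flags. As a minimal sketch, the equivalent manual invocations against the pod this test creates would look roughly like the following (the pod name redis-master-g7942 is taken from the log further down; --kubeconfig is omitted for brevity):

    # Full log stream for the redis-master container
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505

    # Only the last line of output
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505 --tail=1

    # Only the first byte of output
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505 --limit-bytes=1

    # Prefix each line with an RFC3339 timestamp
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505 --tail=1 --timestamps

    # Restrict to a relative time window (recent second, then last day)
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505 --since=1s
    kubectl logs redis-master-g7942 redis-master --namespace=kubectl-9505 --since=24h

These are standard kubectl flags; the test asserts on the shape of each result (one line, one byte, a timestamp prefix, an empty or full stream), as seen in the stdout captures below.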
Jun 29 14:11:31.663: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:11:31.663: INFO: Found 0 / 1 Jun 29 14:11:32.633: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:11:32.633: INFO: Found 0 / 1 Jun 29 14:11:33.592: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:11:33.593: INFO: Found 0 / 1 Jun 29 14:11:34.592: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:11:34.592: INFO: Found 1 / 1 Jun 29 14:11:34.592: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 29 14:11:34.596: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:11:34.596: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 29 14:11:34.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505' Jun 29 14:11:34.707: INFO: stderr: "" Jun 29 14:11:34.707: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Jun 14:11:33.332 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jun 14:11:33.332 # Server started, Redis version 3.2.12\n1:M 29 Jun 14:11:33.332 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 29 Jun 14:11:33.332 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 29 14:11:34.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505 --tail=1' Jun 29 14:11:34.821: INFO: stderr: "" Jun 29 14:11:34.821: INFO: stdout: "1:M 29 Jun 14:11:33.332 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 29 14:11:34.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505 --limit-bytes=1' Jun 29 14:11:34.939: INFO: stderr: "" Jun 29 14:11:34.939: INFO: stdout: " " STEP: exposing timestamps Jun 29 14:11:34.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505 --tail=1 --timestamps' Jun 29 14:11:35.058: INFO: stderr: "" Jun 29 14:11:35.058: INFO: stdout: "2020-06-29T14:11:33.332551557Z 1:M 29 Jun 14:11:33.332 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 29 14:11:37.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505 --since=1s' Jun 29 14:11:37.660: INFO: stderr: "" Jun 29 14:11:37.660: INFO: stdout: "" Jun 29 14:11:37.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7942 redis-master --namespace=kubectl-9505 --since=24h' Jun 29 14:11:37.757: INFO: stderr: "" Jun 29 14:11:37.757: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Jun 14:11:33.332 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jun 14:11:33.332 # Server started, Redis version 3.2.12\n1:M 29 Jun 14:11:33.332 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jun 14:11:33.332 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jun 29 14:11:37.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9505' Jun 29 14:11:37.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:11:37.848: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 29 14:11:37.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9505' Jun 29 14:11:37.939: INFO: stderr: "No resources found.\n" Jun 29 14:11:37.939: INFO: stdout: "" Jun 29 14:11:37.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9505 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 29 14:11:38.023: INFO: stderr: "" Jun 29 14:11:38.023: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:11:38.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9505" for this suite. Jun 29 14:11:44.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:11:44.116: INFO: namespace kubectl-9505 deletion completed in 6.089523257s • [SLOW TEST:13.914 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:11:44.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:11:44.155: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:11:45.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5078" for this suite. 
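[Annotation] The CustomResourceDefinition test above only creates and then deletes a definition object. A minimal sketch of doing the same by hand against a v1.15 cluster, using the v1beta1 API that era serves (the group and kind names here are illustrative, not taken from the test):

    # Register a hypothetical Foo custom resource type
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      versions:
        - name: v1
          served: true
          storage: true
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF

    # Deleting the definition also removes any custom objects of that kind
    kubectl delete crd foos.example.com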
Jun 29 14:11:51.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:11:51.318: INFO: namespace custom-resource-definition-5078 deletion completed in 6.099052371s • [SLOW TEST:7.202 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:11:51.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 14:11:57.456: INFO: DNS probes using dns-test-18274302-b102-4cd4-80e0-2956b31099a6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 14:12:03.603: INFO: File wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:03.607: INFO: File jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 29 14:12:03.607: INFO: Lookups using dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 failed for: [wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local] Jun 29 14:12:08.614: INFO: File wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:08.618: INFO: File jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:08.618: INFO: Lookups using dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 failed for: [wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local] Jun 29 14:12:13.612: INFO: File wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:13.616: INFO: File jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:13.616: INFO: Lookups using dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 failed for: [wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local] Jun 29 14:12:18.613: INFO: File wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:18.616: INFO: File jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:18.616: INFO: Lookups using dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 failed for: [wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local] Jun 29 14:12:23.612: INFO: File wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 29 14:12:23.617: INFO: File jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local from pod dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 29 14:12:23.617: INFO: Lookups using dns-8677/dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 failed for: [wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local] Jun 29 14:12:28.617: INFO: DNS probes using dns-test-0409ac25-9a4f-48b7-9c94-66c98b756010 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8677.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8677.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 14:12:35.354: INFO: DNS probes using dns-test-5ea43797-f283-4759-84d2-2d416a9241ad succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:12:35.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8677" for this suite. Jun 29 14:12:41.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:12:41.581: INFO: namespace dns-8677 deletion completed in 6.114673604s • [SLOW TEST:50.263 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:12:41.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-ba5c4074-90af-453f-a550-87bb5d698c63 STEP: Creating a pod to test consume secrets Jun 29 14:12:41.697: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f" in namespace "projected-2447" to be "success or failure" Jun 29 14:12:41.701: INFO: Pod "pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.818398ms Jun 29 14:12:43.705: INFO: Pod "pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007947264s Jun 29 14:12:45.710: INFO: Pod "pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012656851s STEP: Saw pod success Jun 29 14:12:45.710: INFO: Pod "pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f" satisfied condition "success or failure" Jun 29 14:12:45.713: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f container projected-secret-volume-test: STEP: delete the pod Jun 29 14:12:45.751: INFO: Waiting for pod pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f to disappear Jun 29 14:12:45.755: INFO: Pod pod-projected-secrets-9459dddc-f6d1-4fe8-8350-8aae00dff54f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:12:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2447" for this suite. Jun 29 14:12:51.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:12:51.866: INFO: namespace projected-2447 deletion completed in 6.107455442s • [SLOW TEST:10.284 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:12:51.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-644 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 29 14:12:51.915: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 29 14:13:14.046: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.109 8081 | grep -v '^\s*$'] Namespace:pod-network-test-644 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 14:13:14.046: INFO: >>> kubeConfig: /root/.kube/config I0629 14:13:14.099346 6 log.go:172] (0xc000a0e210) (0xc002104460) Create stream I0629 14:13:14.099394 6 log.go:172] (0xc000a0e210) (0xc002104460) Stream added, broadcasting: 1 I0629 14:13:14.101932 6 log.go:172] (0xc000a0e210) Reply frame received for 1 I0629 14:13:14.101971 6 log.go:172] (0xc000a0e210) (0xc000660820) Create stream I0629 14:13:14.101986 6 log.go:172] (0xc000a0e210) (0xc000660820) Stream added, broadcasting: 3 I0629 14:13:14.103053 6 log.go:172] (0xc000a0e210) 
Reply frame received for 3 I0629 14:13:14.103218 6 log.go:172] (0xc000a0e210) (0xc001228000) Create stream I0629 14:13:14.103254 6 log.go:172] (0xc000a0e210) (0xc001228000) Stream added, broadcasting: 5 I0629 14:13:14.104139 6 log.go:172] (0xc000a0e210) Reply frame received for 5 I0629 14:13:15.201592 6 log.go:172] (0xc000a0e210) Data frame received for 3 I0629 14:13:15.201643 6 log.go:172] (0xc000660820) (3) Data frame handling I0629 14:13:15.201679 6 log.go:172] (0xc000660820) (3) Data frame sent I0629 14:13:15.201807 6 log.go:172] (0xc000a0e210) Data frame received for 3 I0629 14:13:15.201830 6 log.go:172] (0xc000660820) (3) Data frame handling I0629 14:13:15.201889 6 log.go:172] (0xc000a0e210) Data frame received for 5 I0629 14:13:15.201913 6 log.go:172] (0xc001228000) (5) Data frame handling I0629 14:13:15.204315 6 log.go:172] (0xc000a0e210) Data frame received for 1 I0629 14:13:15.204345 6 log.go:172] (0xc002104460) (1) Data frame handling I0629 14:13:15.204375 6 log.go:172] (0xc002104460) (1) Data frame sent I0629 14:13:15.204394 6 log.go:172] (0xc000a0e210) (0xc002104460) Stream removed, broadcasting: 1 I0629 14:13:15.204502 6 log.go:172] (0xc000a0e210) (0xc002104460) Stream removed, broadcasting: 1 I0629 14:13:15.204536 6 log.go:172] (0xc000a0e210) (0xc000660820) Stream removed, broadcasting: 3 I0629 14:13:15.204828 6 log.go:172] (0xc000a0e210) (0xc001228000) Stream removed, broadcasting: 5 Jun 29 14:13:15.204: INFO: Found all expected endpoints: [netserver-0] I0629 14:13:15.204996 6 log.go:172] (0xc000a0e210) Go away received Jun 29 14:13:15.209: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.57 8081 | grep -v '^\s*$'] Namespace:pod-network-test-644 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 14:13:15.209: INFO: >>> kubeConfig: /root/.kube/config I0629 14:13:15.234818 6 log.go:172] (0xc001ea8370) (0xc000661400) Create stream I0629 14:13:15.234858 6 log.go:172] (0xc001ea8370) (0xc000661400) Stream added, broadcasting: 1 I0629 14:13:15.237573 6 log.go:172] (0xc001ea8370) Reply frame received for 1 I0629 14:13:15.237610 6 log.go:172] (0xc001ea8370) (0xc00021a140) Create stream I0629 14:13:15.237623 6 log.go:172] (0xc001ea8370) (0xc00021a140) Stream added, broadcasting: 3 I0629 14:13:15.238794 6 log.go:172] (0xc001ea8370) Reply frame received for 3 I0629 14:13:15.238860 6 log.go:172] (0xc001ea8370) (0xc0006614a0) Create stream I0629 14:13:15.238888 6 log.go:172] (0xc001ea8370) (0xc0006614a0) Stream added, broadcasting: 5 I0629 14:13:15.239951 6 log.go:172] (0xc001ea8370) Reply frame received for 5 I0629 14:13:16.298714 6 log.go:172] (0xc001ea8370) Data frame received for 3 I0629 14:13:16.298760 6 log.go:172] (0xc00021a140) (3) Data frame handling I0629 14:13:16.298825 6 log.go:172] (0xc00021a140) (3) Data frame sent I0629 14:13:16.299022 6 log.go:172] (0xc001ea8370) Data frame received for 3 I0629 14:13:16.299047 6 log.go:172] (0xc00021a140) (3) Data frame handling I0629 14:13:16.299073 6 log.go:172] (0xc001ea8370) Data frame received for 5 I0629 14:13:16.299095 6 log.go:172] (0xc0006614a0) (5) Data frame handling I0629 14:13:16.301634 6 log.go:172] (0xc001ea8370) Data frame received for 1 I0629 14:13:16.301709 6 log.go:172] (0xc000661400) (1) Data frame handling I0629 14:13:16.301739 6 log.go:172] (0xc000661400) (1) Data frame sent I0629 14:13:16.301760 6 log.go:172] (0xc001ea8370) (0xc000661400) Stream removed, broadcasting: 1 I0629 14:13:16.301777 6 log.go:172] 
(0xc001ea8370) Go away received I0629 14:13:16.301953 6 log.go:172] (0xc001ea8370) (0xc000661400) Stream removed, broadcasting: 1 I0629 14:13:16.301985 6 log.go:172] (0xc001ea8370) (0xc00021a140) Stream removed, broadcasting: 3 I0629 14:13:16.302002 6 log.go:172] (0xc001ea8370) (0xc0006614a0) Stream removed, broadcasting: 5 Jun 29 14:13:16.302: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:13:16.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-644" for this suite. Jun 29 14:13:40.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:13:40.394: INFO: namespace pod-network-test-644 deletion completed in 24.087698951s • [SLOW TEST:48.528 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:13:40.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 29 14:13:45.026: INFO: Successfully updated pod "labelsupdateba5a6a48-62ea-49a7-8908-6dd302c68940" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:13:49.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4703" for this suite. 
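[Annotation] The projected downwardAPI test above ("should update labels on modification") changes a pod label and expects the new value to show up in the projected volume file. A hand-rolled equivalent, as a sketch with illustrative names:

    # Pod that projects its own labels into /etc/podinfo/labels
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        stage: before
    spec:
      containers:
        - name: client
          image: busybox
          command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
      volumes:
        - name: podinfo
          projected:
            sources:
              - downwardAPI:
                  items:
                    - path: labels
                      fieldRef:
                        fieldPath: metadata.labels
    EOF

    # The label change is reflected in the projected file once the kubelet resyncs,
    # which is why the test waits a few seconds after the update
    kubectl label pod labels-demo stage=after --overwrite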
Jun 29 14:14:11.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:14:11.170: INFO: namespace projected-4703 deletion completed in 22.099522965s • [SLOW TEST:30.775 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:14:11.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-09c68e52-c5f7-43bb-8fed-21a607c24330 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:14:11.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4517" for this suite. Jun 29 14:14:17.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:14:17.349: INFO: namespace configmap-4517 deletion completed in 6.086180989s • [SLOW TEST:6.179 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:14:17.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 29 14:14:17.393: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 29 14:14:17.421: INFO: Waiting for terminating namespaces to be deleted... 
Jun 29 14:14:17.424: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 29 14:14:17.427: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.427: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 14:14:17.427: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.427: INFO: Container kindnet-cni ready: true, restart count 4 Jun 29 14:14:17.427: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 29 14:14:17.433: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.433: INFO: Container coredns ready: true, restart count 0 Jun 29 14:14:17.433: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.433: INFO: Container coredns ready: true, restart count 0 Jun 29 14:14:17.433: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.433: INFO: Container kube-proxy ready: true, restart count 0 Jun 29 14:14:17.433: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 29 14:14:17.433: INFO: Container kindnet-cni ready: true, restart count 4 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b699d793-6583-4919-95bd-a0c905b54c8b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b699d793-6583-4919-95bd-a0c905b54c8b off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b699d793-6583-4919-95bd-a0c905b54c8b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:14:25.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5412" for this suite. 
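[Annotation] The scheduling test above pins a pod to a node via a freshly applied random label (the kubernetes.io/e2e-* key with value 42 seen in the STEPs). Reproducing the same check by hand looks roughly like this; the label key and pod name are illustrative:

    # Label the node the test picked
    kubectl label nodes iruya-worker example.com/e2e-demo=42

    # A pod that can only schedule onto nodes carrying that label
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-labels
    spec:
      nodeSelector:
        example.com/e2e-demo: "42"
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1
    EOF

    # Clean up: the trailing dash removes the label, as the test's AfterEach does
    kubectl label nodes iruya-worker example.com/e2e-demo-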
Jun 29 14:14:43.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:14:43.664: INFO: namespace sched-pred-5412 deletion completed in 18.112386869s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.315 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:14:43.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:14:43.710: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.293807ms)
Jun 29 14:14:43.713: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.438111ms)
Jun 29 14:14:43.717: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.176991ms)
Jun 29 14:14:43.720: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.910068ms)
Jun 29 14:14:43.723: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.254233ms)
Jun 29 14:14:43.726: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.445759ms)
Jun 29 14:14:43.730: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.527261ms)
Jun 29 14:14:43.734: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.521044ms)
Jun 29 14:14:43.737: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.857131ms)
Jun 29 14:14:43.741: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.596941ms)
Jun 29 14:14:43.744: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.328845ms)
Jun 29 14:14:43.748: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.613527ms)
Jun 29 14:14:43.768: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 19.892695ms)
Jun 29 14:14:43.772: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.673826ms)
Jun 29 14:14:43.775: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.297298ms)
Jun 29 14:14:43.779: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.538032ms)
Jun 29 14:14:43.782: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.448142ms)
Jun 29 14:14:43.786: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.64095ms)
Jun 29 14:14:43.789: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.650605ms)
Jun 29 14:14:43.793: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.664894ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:14:43.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9549" for this suite. Jun 29 14:14:49.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:14:49.927: INFO: namespace proxy-9549 deletion completed in 6.129508425s • [SLOW TEST:6.261 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:14:49.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1830 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1830 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1830 Jun 29 14:14:50.012: INFO: Found 0 stateful pods, waiting for 1 Jun 29 14:15:00.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 29 14:15:00.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:15:00.300: INFO: stderr: "I0629 14:15:00.142784 2950 log.go:172] (0xc00089c630) (0xc0002e0b40) Create stream\nI0629 14:15:00.142848 2950 log.go:172] (0xc00089c630) (0xc0002e0b40) Stream added, broadcasting: 1\nI0629 14:15:00.146321 2950 log.go:172] (0xc00089c630) Reply frame received for 1\nI0629 14:15:00.146388 2950 log.go:172] (0xc00089c630) (0xc0002e0be0) Create stream\nI0629 14:15:00.146405 2950 log.go:172] (0xc00089c630) (0xc0002e0be0) Stream added, broadcasting: 3\nI0629 14:15:00.148158 2950 log.go:172] (0xc00089c630) Reply frame received for 3\nI0629 14:15:00.148193 2950 log.go:172] (0xc00089c630) (0xc0002e0280) Create stream\nI0629 14:15:00.148207 2950 
log.go:172] (0xc00089c630) (0xc0002e0280) Stream added, broadcasting: 5\nI0629 14:15:00.149092 2950 log.go:172] (0xc00089c630) Reply frame received for 5\nI0629 14:15:00.240887 2950 log.go:172] (0xc00089c630) Data frame received for 5\nI0629 14:15:00.240919 2950 log.go:172] (0xc0002e0280) (5) Data frame handling\nI0629 14:15:00.240941 2950 log.go:172] (0xc0002e0280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:15:00.290793 2950 log.go:172] (0xc00089c630) Data frame received for 5\nI0629 14:15:00.290942 2950 log.go:172] (0xc0002e0280) (5) Data frame handling\nI0629 14:15:00.291001 2950 log.go:172] (0xc00089c630) Data frame received for 3\nI0629 14:15:00.291059 2950 log.go:172] (0xc0002e0be0) (3) Data frame handling\nI0629 14:15:00.291098 2950 log.go:172] (0xc0002e0be0) (3) Data frame sent\nI0629 14:15:00.291125 2950 log.go:172] (0xc00089c630) Data frame received for 3\nI0629 14:15:00.291141 2950 log.go:172] (0xc0002e0be0) (3) Data frame handling\nI0629 14:15:00.293076 2950 log.go:172] (0xc00089c630) Data frame received for 1\nI0629 14:15:00.293302 2950 log.go:172] (0xc0002e0b40) (1) Data frame handling\nI0629 14:15:00.293360 2950 log.go:172] (0xc0002e0b40) (1) Data frame sent\nI0629 14:15:00.293393 2950 log.go:172] (0xc00089c630) (0xc0002e0b40) Stream removed, broadcasting: 1\nI0629 14:15:00.293437 2950 log.go:172] (0xc00089c630) Go away received\nI0629 14:15:00.294025 2950 log.go:172] (0xc00089c630) (0xc0002e0b40) Stream removed, broadcasting: 1\nI0629 14:15:00.294049 2950 log.go:172] (0xc00089c630) (0xc0002e0be0) Stream removed, broadcasting: 3\nI0629 14:15:00.294061 2950 log.go:172] (0xc00089c630) (0xc0002e0280) Stream removed, broadcasting: 5\n" Jun 29 14:15:00.300: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:15:00.300: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:15:00.304: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 29 14:15:10.308: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:15:10.308: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:15:10.344: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999624s Jun 29 14:15:11.349: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974121331s Jun 29 14:15:12.353: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.969030885s Jun 29 14:15:13.358: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964692851s Jun 29 14:15:14.365: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.960105166s Jun 29 14:15:15.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.953064668s Jun 29 14:15:16.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.949015086s Jun 29 14:15:17.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.94493119s Jun 29 14:15:18.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.941013154s Jun 29 14:15:19.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 937.15054ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1830 Jun 29 14:15:20.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jun 29 14:15:20.644: INFO: stderr: "I0629 14:15:20.530728 2971 log.go:172] (0xc000116e70) (0xc000502640) Create stream\nI0629 14:15:20.530793 2971 log.go:172] (0xc000116e70) (0xc000502640) Stream added, broadcasting: 1\nI0629 14:15:20.532981 2971 log.go:172] (0xc000116e70) Reply frame received for 1\nI0629 14:15:20.533039 2971 log.go:172] (0xc000116e70) (0xc0005026e0) Create stream\nI0629 14:15:20.533054 2971 log.go:172] (0xc000116e70) (0xc0005026e0) Stream added, broadcasting: 3\nI0629 14:15:20.534457 2971 log.go:172] (0xc000116e70) Reply frame received for 3\nI0629 14:15:20.534492 2971 log.go:172] (0xc000116e70) (0xc000502780) Create stream\nI0629 14:15:20.534503 2971 log.go:172] (0xc000116e70) (0xc000502780) Stream added, broadcasting: 5\nI0629 14:15:20.535664 2971 log.go:172] (0xc000116e70) Reply frame received for 5\nI0629 14:15:20.635180 2971 log.go:172] (0xc000116e70) Data frame received for 3\nI0629 14:15:20.635217 2971 log.go:172] (0xc0005026e0) (3) Data frame handling\nI0629 14:15:20.635241 2971 log.go:172] (0xc0005026e0) (3) Data frame sent\nI0629 14:15:20.635255 2971 log.go:172] (0xc000116e70) Data frame received for 3\nI0629 14:15:20.635266 2971 log.go:172] (0xc0005026e0) (3) Data frame handling\nI0629 14:15:20.635355 2971 log.go:172] (0xc000116e70) Data frame received for 5\nI0629 14:15:20.635391 2971 log.go:172] (0xc000502780) (5) Data frame handling\nI0629 14:15:20.635418 2971 log.go:172] (0xc000502780) (5) Data frame sent\nI0629 14:15:20.635435 2971 log.go:172] (0xc000116e70) Data frame received for 5\nI0629 14:15:20.635443 2971 log.go:172] (0xc000502780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:15:20.637067 2971 log.go:172] (0xc000116e70) Data frame received for 1\nI0629 14:15:20.637085 2971 log.go:172] (0xc000502640) (1) Data frame handling\nI0629 14:15:20.637298 2971 log.go:172] (0xc000502640) (1) Data frame sent\nI0629 14:15:20.637332 2971 log.go:172] (0xc000116e70) (0xc000502640) Stream removed, broadcasting: 1\nI0629 14:15:20.637350 2971 log.go:172] (0xc000116e70) Go away received\nI0629 14:15:20.637738 2971 log.go:172] (0xc000116e70) (0xc000502640) Stream removed, broadcasting: 1\nI0629 14:15:20.637761 2971 log.go:172] (0xc000116e70) (0xc0005026e0) Stream removed, broadcasting: 3\nI0629 14:15:20.637771 2971 log.go:172] (0xc000116e70) (0xc000502780) Stream removed, broadcasting: 5\n" Jun 29 14:15:20.644: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:15:20.644: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:15:20.647: INFO: Found 1 stateful pods, waiting for 3 Jun 29 14:15:30.652: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 29 14:15:30.653: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 29 14:15:30.653: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 29 14:15:30.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:15:33.552: INFO: stderr: "I0629 14:15:33.468106 2991 log.go:172] (0xc000b1a420) (0xc0005fcb40) Create stream\nI0629 14:15:33.468140 2991 log.go:172] (0xc000b1a420) 
(0xc0005fcb40) Stream added, broadcasting: 1\nI0629 14:15:33.470240 2991 log.go:172] (0xc000b1a420) Reply frame received for 1\nI0629 14:15:33.470281 2991 log.go:172] (0xc000b1a420) (0xc0005fcbe0) Create stream\nI0629 14:15:33.470290 2991 log.go:172] (0xc000b1a420) (0xc0005fcbe0) Stream added, broadcasting: 3\nI0629 14:15:33.471231 2991 log.go:172] (0xc000b1a420) Reply frame received for 3\nI0629 14:15:33.471260 2991 log.go:172] (0xc000b1a420) (0xc0005fcc80) Create stream\nI0629 14:15:33.471269 2991 log.go:172] (0xc000b1a420) (0xc0005fcc80) Stream added, broadcasting: 5\nI0629 14:15:33.472019 2991 log.go:172] (0xc000b1a420) Reply frame received for 5\nI0629 14:15:33.543811 2991 log.go:172] (0xc000b1a420) Data frame received for 5\nI0629 14:15:33.543861 2991 log.go:172] (0xc0005fcc80) (5) Data frame handling\nI0629 14:15:33.543878 2991 log.go:172] (0xc0005fcc80) (5) Data frame sent\nI0629 14:15:33.543907 2991 log.go:172] (0xc000b1a420) Data frame received for 5\nI0629 14:15:33.543936 2991 log.go:172] (0xc0005fcc80) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:15:33.543961 2991 log.go:172] (0xc000b1a420) Data frame received for 3\nI0629 14:15:33.543974 2991 log.go:172] (0xc0005fcbe0) (3) Data frame handling\nI0629 14:15:33.543998 2991 log.go:172] (0xc0005fcbe0) (3) Data frame sent\nI0629 14:15:33.544025 2991 log.go:172] (0xc000b1a420) Data frame received for 3\nI0629 14:15:33.544042 2991 log.go:172] (0xc0005fcbe0) (3) Data frame handling\nI0629 14:15:33.545925 2991 log.go:172] (0xc000b1a420) Data frame received for 1\nI0629 14:15:33.545962 2991 log.go:172] (0xc0005fcb40) (1) Data frame handling\nI0629 14:15:33.545990 2991 log.go:172] (0xc0005fcb40) (1) Data frame sent\nI0629 14:15:33.546018 2991 log.go:172] (0xc000b1a420) (0xc0005fcb40) Stream removed, broadcasting: 1\nI0629 14:15:33.546038 2991 log.go:172] (0xc000b1a420) Go away received\nI0629 14:15:33.546365 2991 log.go:172] (0xc000b1a420) (0xc0005fcb40) Stream removed, broadcasting: 1\nI0629 14:15:33.546387 2991 log.go:172] (0xc000b1a420) (0xc0005fcbe0) Stream removed, broadcasting: 3\nI0629 14:15:33.546397 2991 log.go:172] (0xc000b1a420) (0xc0005fcc80) Stream removed, broadcasting: 5\n" Jun 29 14:15:33.552: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:15:33.552: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:15:33.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:15:33.832: INFO: stderr: "I0629 14:15:33.678166 3024 log.go:172] (0xc0001168f0) (0xc000550820) Create stream\nI0629 14:15:33.678210 3024 log.go:172] (0xc0001168f0) (0xc000550820) Stream added, broadcasting: 1\nI0629 14:15:33.680043 3024 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0629 14:15:33.680103 3024 log.go:172] (0xc0001168f0) (0xc0009fe000) Create stream\nI0629 14:15:33.680126 3024 log.go:172] (0xc0001168f0) (0xc0009fe000) Stream added, broadcasting: 3\nI0629 14:15:33.681087 3024 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0629 14:15:33.681250 3024 log.go:172] (0xc0001168f0) (0xc00080a000) Create stream\nI0629 14:15:33.681271 3024 log.go:172] (0xc0001168f0) (0xc00080a000) Stream added, broadcasting: 5\nI0629 14:15:33.682156 3024 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0629 14:15:33.752738 3024 log.go:172] (0xc0001168f0) 
Data frame received for 5\nI0629 14:15:33.752786 3024 log.go:172] (0xc00080a000) (5) Data frame handling\nI0629 14:15:33.752815 3024 log.go:172] (0xc00080a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:15:33.822340 3024 log.go:172] (0xc0001168f0) Data frame received for 3\nI0629 14:15:33.822371 3024 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0629 14:15:33.822386 3024 log.go:172] (0xc0009fe000) (3) Data frame sent\nI0629 14:15:33.822883 3024 log.go:172] (0xc0001168f0) Data frame received for 5\nI0629 14:15:33.822917 3024 log.go:172] (0xc00080a000) (5) Data frame handling\nI0629 14:15:33.823034 3024 log.go:172] (0xc0001168f0) Data frame received for 3\nI0629 14:15:33.823049 3024 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0629 14:15:33.824829 3024 log.go:172] (0xc0001168f0) Data frame received for 1\nI0629 14:15:33.824844 3024 log.go:172] (0xc000550820) (1) Data frame handling\nI0629 14:15:33.824852 3024 log.go:172] (0xc000550820) (1) Data frame sent\nI0629 14:15:33.824870 3024 log.go:172] (0xc0001168f0) (0xc000550820) Stream removed, broadcasting: 1\nI0629 14:15:33.824913 3024 log.go:172] (0xc0001168f0) Go away received\nI0629 14:15:33.825304 3024 log.go:172] (0xc0001168f0) (0xc000550820) Stream removed, broadcasting: 1\nI0629 14:15:33.825324 3024 log.go:172] (0xc0001168f0) (0xc0009fe000) Stream removed, broadcasting: 3\nI0629 14:15:33.825331 3024 log.go:172] (0xc0001168f0) (0xc00080a000) Stream removed, broadcasting: 5\n" Jun 29 14:15:33.833: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:15:33.833: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:15:33.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 29 14:15:34.058: INFO: stderr: "I0629 14:15:33.966794 3044 log.go:172] (0xc000988370) (0xc0007b2640) Create stream\nI0629 14:15:33.966845 3044 log.go:172] (0xc000988370) (0xc0007b2640) Stream added, broadcasting: 1\nI0629 14:15:33.968515 3044 log.go:172] (0xc000988370) Reply frame received for 1\nI0629 14:15:33.968543 3044 log.go:172] (0xc000988370) (0xc0008ce000) Create stream\nI0629 14:15:33.968550 3044 log.go:172] (0xc000988370) (0xc0008ce000) Stream added, broadcasting: 3\nI0629 14:15:33.969519 3044 log.go:172] (0xc000988370) Reply frame received for 3\nI0629 14:15:33.969551 3044 log.go:172] (0xc000988370) (0xc0008ce0a0) Create stream\nI0629 14:15:33.969559 3044 log.go:172] (0xc000988370) (0xc0008ce0a0) Stream added, broadcasting: 5\nI0629 14:15:33.970338 3044 log.go:172] (0xc000988370) Reply frame received for 5\nI0629 14:15:34.019681 3044 log.go:172] (0xc000988370) Data frame received for 5\nI0629 14:15:34.019710 3044 log.go:172] (0xc0008ce0a0) (5) Data frame handling\nI0629 14:15:34.019730 3044 log.go:172] (0xc0008ce0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0629 14:15:34.050757 3044 log.go:172] (0xc000988370) Data frame received for 3\nI0629 14:15:34.050786 3044 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0629 14:15:34.050809 3044 log.go:172] (0xc0008ce000) (3) Data frame sent\nI0629 14:15:34.050819 3044 log.go:172] (0xc000988370) Data frame received for 3\nI0629 14:15:34.050827 3044 log.go:172] (0xc0008ce000) (3) Data frame handling\nI0629 14:15:34.051095 3044 log.go:172] (0xc000988370) Data frame received for 5\nI0629 
14:15:34.051130 3044 log.go:172] (0xc0008ce0a0) (5) Data frame handling\nI0629 14:15:34.053293 3044 log.go:172] (0xc000988370) Data frame received for 1\nI0629 14:15:34.053330 3044 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0629 14:15:34.053346 3044 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0629 14:15:34.053366 3044 log.go:172] (0xc000988370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0629 14:15:34.053385 3044 log.go:172] (0xc000988370) Go away received\nI0629 14:15:34.054154 3044 log.go:172] (0xc000988370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0629 14:15:34.054205 3044 log.go:172] (0xc000988370) (0xc0008ce000) Stream removed, broadcasting: 3\nI0629 14:15:34.054226 3044 log.go:172] (0xc000988370) (0xc0008ce0a0) Stream removed, broadcasting: 5\n" Jun 29 14:15:34.058: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 29 14:15:34.058: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 29 14:15:34.058: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:15:34.062: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jun 29 14:15:44.071: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:15:44.071: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:15:44.071: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 29 14:15:44.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999653s Jun 29 14:15:45.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994426728s Jun 29 14:15:46.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990083123s Jun 29 14:15:47.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985025326s Jun 29 14:15:48.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978867879s Jun 29 14:15:49.110: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973313704s Jun 29 14:15:50.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967457101s Jun 29 14:15:51.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963043735s Jun 29 14:15:52.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957659595s Jun 29 14:15:53.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.080923ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1830 Jun 29 14:15:54.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:15:54.378: INFO: stderr: "I0629 14:15:54.268374 3063 log.go:172] (0xc00073a420) (0xc0006888c0) Create stream\nI0629 14:15:54.268435 3063 log.go:172] (0xc00073a420) (0xc0006888c0) Stream added, broadcasting: 1\nI0629 14:15:54.271059 3063 log.go:172] (0xc00073a420) Reply frame received for 1\nI0629 14:15:54.271104 3063 log.go:172] (0xc00073a420) (0xc000536820) Create stream\nI0629 14:15:54.271123 3063 log.go:172] (0xc00073a420) (0xc000536820) Stream added, broadcasting: 3\nI0629 14:15:54.272178 3063 log.go:172] (0xc00073a420) Reply frame received for 3\nI0629 14:15:54.272206 3063 log.go:172] (0xc00073a420) (0xc0005368c0) Create stream\nI0629 14:15:54.272216 3063 log.go:172]
(0xc00073a420) (0xc0005368c0) Stream added, broadcasting: 5\nI0629 14:15:54.273414 3063 log.go:172] (0xc00073a420) Reply frame received for 5\nI0629 14:15:54.368651 3063 log.go:172] (0xc00073a420) Data frame received for 3\nI0629 14:15:54.368686 3063 log.go:172] (0xc000536820) (3) Data frame handling\nI0629 14:15:54.368708 3063 log.go:172] (0xc000536820) (3) Data frame sent\nI0629 14:15:54.368722 3063 log.go:172] (0xc00073a420) Data frame received for 3\nI0629 14:15:54.368737 3063 log.go:172] (0xc000536820) (3) Data frame handling\nI0629 14:15:54.368986 3063 log.go:172] (0xc00073a420) Data frame received for 5\nI0629 14:15:54.369019 3063 log.go:172] (0xc0005368c0) (5) Data frame handling\nI0629 14:15:54.369038 3063 log.go:172] (0xc0005368c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:15:54.369389 3063 log.go:172] (0xc00073a420) Data frame received for 5\nI0629 14:15:54.369422 3063 log.go:172] (0xc0005368c0) (5) Data frame handling\nI0629 14:15:54.371100 3063 log.go:172] (0xc00073a420) Data frame received for 1\nI0629 14:15:54.371122 3063 log.go:172] (0xc0006888c0) (1) Data frame handling\nI0629 14:15:54.371142 3063 log.go:172] (0xc0006888c0) (1) Data frame sent\nI0629 14:15:54.371155 3063 log.go:172] (0xc00073a420) (0xc0006888c0) Stream removed, broadcasting: 1\nI0629 14:15:54.371176 3063 log.go:172] (0xc00073a420) Go away received\nI0629 14:15:54.371587 3063 log.go:172] (0xc00073a420) (0xc0006888c0) Stream removed, broadcasting: 1\nI0629 14:15:54.371620 3063 log.go:172] (0xc00073a420) (0xc000536820) Stream removed, broadcasting: 3\nI0629 14:15:54.371639 3063 log.go:172] (0xc00073a420) (0xc0005368c0) Stream removed, broadcasting: 5\n" Jun 29 14:15:54.378: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:15:54.378: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:15:54.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:15:54.562: INFO: stderr: "I0629 14:15:54.492299 3084 log.go:172] (0xc00098e370) (0xc0008e4640) Create stream\nI0629 14:15:54.492382 3084 log.go:172] (0xc00098e370) (0xc0008e4640) Stream added, broadcasting: 1\nI0629 14:15:54.494968 3084 log.go:172] (0xc00098e370) Reply frame received for 1\nI0629 14:15:54.495015 3084 log.go:172] (0xc00098e370) (0xc00042c000) Create stream\nI0629 14:15:54.495035 3084 log.go:172] (0xc00098e370) (0xc00042c000) Stream added, broadcasting: 3\nI0629 14:15:54.495954 3084 log.go:172] (0xc00098e370) Reply frame received for 3\nI0629 14:15:54.495979 3084 log.go:172] (0xc00098e370) (0xc0008e46e0) Create stream\nI0629 14:15:54.495987 3084 log.go:172] (0xc00098e370) (0xc0008e46e0) Stream added, broadcasting: 5\nI0629 14:15:54.496966 3084 log.go:172] (0xc00098e370) Reply frame received for 5\nI0629 14:15:54.555136 3084 log.go:172] (0xc00098e370) Data frame received for 3\nI0629 14:15:54.555180 3084 log.go:172] (0xc00042c000) (3) Data frame handling\nI0629 14:15:54.555216 3084 log.go:172] (0xc00042c000) (3) Data frame sent\nI0629 14:15:54.555234 3084 log.go:172] (0xc00098e370) Data frame received for 3\nI0629 14:15:54.555247 3084 log.go:172] (0xc00042c000) (3) Data frame handling\nI0629 14:15:54.555273 3084 log.go:172] (0xc00098e370) Data frame received for 5\nI0629 14:15:54.555287 3084 log.go:172] (0xc0008e46e0) (5) Data frame handling\nI0629 
14:15:54.555302 3084 log.go:172] (0xc0008e46e0) (5) Data frame sent\nI0629 14:15:54.555316 3084 log.go:172] (0xc00098e370) Data frame received for 5\nI0629 14:15:54.555329 3084 log.go:172] (0xc0008e46e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:15:54.556974 3084 log.go:172] (0xc00098e370) Data frame received for 1\nI0629 14:15:54.556996 3084 log.go:172] (0xc0008e4640) (1) Data frame handling\nI0629 14:15:54.557006 3084 log.go:172] (0xc0008e4640) (1) Data frame sent\nI0629 14:15:54.557022 3084 log.go:172] (0xc00098e370) (0xc0008e4640) Stream removed, broadcasting: 1\nI0629 14:15:54.557042 3084 log.go:172] (0xc00098e370) Go away received\nI0629 14:15:54.557388 3084 log.go:172] (0xc00098e370) (0xc0008e4640) Stream removed, broadcasting: 1\nI0629 14:15:54.557402 3084 log.go:172] (0xc00098e370) (0xc00042c000) Stream removed, broadcasting: 3\nI0629 14:15:54.557408 3084 log.go:172] (0xc00098e370) (0xc0008e46e0) Stream removed, broadcasting: 5\n" Jun 29 14:15:54.563: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:15:54.563: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:15:54.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1830 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 29 14:15:54.762: INFO: stderr: "I0629 14:15:54.685551 3104 log.go:172] (0xc0009442c0) (0xc00075a640) Create stream\nI0629 14:15:54.685626 3104 log.go:172] (0xc0009442c0) (0xc00075a640) Stream added, broadcasting: 1\nI0629 14:15:54.687545 3104 log.go:172] (0xc0009442c0) Reply frame received for 1\nI0629 14:15:54.687595 3104 log.go:172] (0xc0009442c0) (0xc000884000) Create stream\nI0629 14:15:54.687608 3104 log.go:172] (0xc0009442c0) (0xc000884000) Stream added, broadcasting: 3\nI0629 14:15:54.688417 3104 log.go:172] (0xc0009442c0) Reply frame received for 3\nI0629 14:15:54.688451 3104 log.go:172] (0xc0009442c0) (0xc00075a6e0) Create stream\nI0629 14:15:54.688471 3104 log.go:172] (0xc0009442c0) (0xc00075a6e0) Stream added, broadcasting: 5\nI0629 14:15:54.689328 3104 log.go:172] (0xc0009442c0) Reply frame received for 5\nI0629 14:15:54.755883 3104 log.go:172] (0xc0009442c0) Data frame received for 3\nI0629 14:15:54.755919 3104 log.go:172] (0xc000884000) (3) Data frame handling\nI0629 14:15:54.755931 3104 log.go:172] (0xc000884000) (3) Data frame sent\nI0629 14:15:54.755945 3104 log.go:172] (0xc0009442c0) Data frame received for 3\nI0629 14:15:54.755953 3104 log.go:172] (0xc000884000) (3) Data frame handling\nI0629 14:15:54.755998 3104 log.go:172] (0xc0009442c0) Data frame received for 5\nI0629 14:15:54.756020 3104 log.go:172] (0xc00075a6e0) (5) Data frame handling\nI0629 14:15:54.756038 3104 log.go:172] (0xc00075a6e0) (5) Data frame sent\nI0629 14:15:54.756050 3104 log.go:172] (0xc0009442c0) Data frame received for 5\nI0629 14:15:54.756059 3104 log.go:172] (0xc00075a6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0629 14:15:54.757165 3104 log.go:172] (0xc0009442c0) Data frame received for 1\nI0629 14:15:54.757179 3104 log.go:172] (0xc00075a640) (1) Data frame handling\nI0629 14:15:54.757187 3104 log.go:172] (0xc00075a640) (1) Data frame sent\nI0629 14:15:54.757402 3104 log.go:172] (0xc0009442c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0629 14:15:54.757543 3104 log.go:172] (0xc0009442c0) Go away received\nI0629 14:15:54.757789 3104 
log.go:172] (0xc0009442c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0629 14:15:54.757810 3104 log.go:172] (0xc0009442c0) (0xc000884000) Stream removed, broadcasting: 3\nI0629 14:15:54.757890 3104 log.go:172] (0xc0009442c0) (0xc00075a6e0) Stream removed, broadcasting: 5\n" Jun 29 14:15:54.762: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 29 14:15:54.762: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 29 14:15:54.762: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 29 14:16:14.781: INFO: Deleting all statefulset in ns statefulset-1830 Jun 29 14:16:14.784: INFO: Scaling statefulset ss to 0 Jun 29 14:16:14.794: INFO: Waiting for statefulset status.replicas updated to 0 Jun 29 14:16:14.797: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:16:14.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1830" for this suite. Jun 29 14:16:20.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:16:20.902: INFO: namespace statefulset-1830 deletion completed in 6.089156196s • [SLOW TEST:90.975 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:16:20.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 14:16:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4845' Jun 29 14:16:21.044: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 29 14:16:21.044: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jun 29 14:16:23.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4845' Jun 29 14:16:23.275: INFO: stderr: "" Jun 29 14:16:23.275: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:16:23.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4845" for this suite. Jun 29 14:16:29.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:16:29.418: INFO: namespace kubectl-4845 deletion completed in 6.135980806s • [SLOW TEST:8.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:16:29.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-f3707974-79eb-4ca0-ab15-7d98217e0bd3 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:16:29.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8461" for this suite. 
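The failure expected by the spec above comes from API-server validation: keys in a Secret's data map must be non-empty (and consist only of alphanumerics, '-', '_' or '.'), so a Secret carrying an empty key is rejected before anything is stored. A minimal way to reproduce that outside the suite (the secret name and payload below are illustrative placeholders, not values from this run):

    # Try to create a Secret whose data map uses an empty string as a key.
    # "dmFsdWU=" is base64 for "value". The apiserver should reject the
    # request with a validation error on the empty key instead of creating
    # the object.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-demo
    data:
      "": dmFsdWU=
    EOF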
Jun 29 14:16:35.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:16:35.620: INFO: namespace secrets-8461 deletion completed in 6.133804951s • [SLOW TEST:6.201 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:16:35.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 29 14:16:35.680: INFO: Waiting up to 5m0s for pod "downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91" in namespace "downward-api-9252" to be "success or failure" Jun 29 14:16:35.710: INFO: Pod "downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91": Phase="Pending", Reason="", readiness=false. Elapsed: 30.675704ms Jun 29 14:16:37.889: INFO: Pod "downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20967478s Jun 29 14:16:39.893: INFO: Pod "downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.213583705s STEP: Saw pod success Jun 29 14:16:39.893: INFO: Pod "downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91" satisfied condition "success or failure" Jun 29 14:16:39.896: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91 container dapi-container: STEP: delete the pod Jun 29 14:16:39.961: INFO: Waiting for pod downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91 to disappear Jun 29 14:16:39.971: INFO: Pod downward-api-3ab63dcd-3702-4f50-b1e4-9d60dc06cf91 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:16:39.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9252" for this suite. 
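The pod used by the spec above gets its own name, namespace, and IP injected through downward-API fieldRef environment variables and simply prints them for the framework to verify from the container log. A minimal sketch of the same wiring (pod and variable names are placeholders, not from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep ^POD_"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF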
Jun 29 14:16:45.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:16:46.065: INFO: namespace downward-api-9252 deletion completed in 6.089821183s • [SLOW TEST:10.445 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:16:46.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-8l4h STEP: Creating a pod to test atomic-volume-subpath Jun 29 14:16:46.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8l4h" in namespace "subpath-6425" to be "success or failure" Jun 29 14:16:46.168: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Pending", Reason="", readiness=false. Elapsed: 3.610652ms Jun 29 14:16:48.173: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008425701s Jun 29 14:16:50.176: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 4.011478896s Jun 29 14:16:52.180: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 6.015447877s Jun 29 14:16:54.184: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 8.019230821s Jun 29 14:16:56.188: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 10.023670372s Jun 29 14:16:58.194: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 12.028999875s Jun 29 14:17:00.198: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 14.032976334s Jun 29 14:17:02.201: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 16.036776111s Jun 29 14:17:04.206: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 18.041027859s Jun 29 14:17:06.210: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 20.045740365s Jun 29 14:17:08.215: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. Elapsed: 22.050359276s Jun 29 14:17:10.219: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.0542355s Jun 29 14:17:12.231: INFO: Pod "pod-subpath-test-secret-8l4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.066128367s STEP: Saw pod success Jun 29 14:17:12.231: INFO: Pod "pod-subpath-test-secret-8l4h" satisfied condition "success or failure" Jun 29 14:17:12.234: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-8l4h container test-container-subpath-secret-8l4h: STEP: delete the pod Jun 29 14:17:12.278: INFO: Waiting for pod pod-subpath-test-secret-8l4h to disappear Jun 29 14:17:12.288: INFO: Pod pod-subpath-test-secret-8l4h no longer exists STEP: Deleting pod pod-subpath-test-secret-8l4h Jun 29 14:17:12.288: INFO: Deleting pod "pod-subpath-test-secret-8l4h" in namespace "subpath-6425" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:17:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6425" for this suite. Jun 29 14:17:18.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:17:18.395: INFO: namespace subpath-6425 deletion completed in 6.099560949s • [SLOW TEST:32.330 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:17:18.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:17:18.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 29 14:17:18.583: INFO: stderr: "" Jun 29 14:17:18.584: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:08:14Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:17:18.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9011" for this 
suite. Jun 29 14:17:24.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:17:24.683: INFO: namespace kubectl-9011 deletion completed in 6.093344265s • [SLOW TEST:6.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:17:24.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jun 29 14:17:29.294: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2315 pod-service-account-47c8c0a4-2e92-407d-bc8d-bd7b78f55286 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 29 14:17:29.523: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2315 pod-service-account-47c8c0a4-2e92-407d-bc8d-bd7b78f55286 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 29 14:17:29.709: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2315 pod-service-account-47c8c0a4-2e92-407d-bc8d-bd7b78f55286 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:17:29.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2315" for this suite. 
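The three exec commands above read the credentials that the ServiceAccount admission plugin mounts into every container at a fixed path (unless automountServiceAccountToken is disabled). The same check works against any pod; the pod and container names below are placeholders:

    # The auto-mounted service-account volume always lands here:
    kubectl exec <pod> -c <container> -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # token     - bearer token for the pod's service account
    # ca.crt    - CA bundle for verifying the apiserver
    # namespace - the namespace the pod is running in
    kubectl exec <pod> -c <container> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token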
Jun 29 14:17:35.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:17:36.026: INFO: namespace svcaccounts-2315 deletion completed in 6.124424827s • [SLOW TEST:11.343 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:17:36.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-876958c7-050b-41dc-8751-768964853b4a STEP: Creating a pod to test consume secrets Jun 29 14:17:36.104: INFO: Waiting up to 5m0s for pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3" in namespace "secrets-7499" to be "success or failure" Jun 29 14:17:36.108: INFO: Pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799723ms Jun 29 14:17:38.133: INFO: Pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028878365s Jun 29 14:17:40.138: INFO: Pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3": Phase="Running", Reason="", readiness=true. Elapsed: 4.033431492s Jun 29 14:17:42.142: INFO: Pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037604956s STEP: Saw pod success Jun 29 14:17:42.142: INFO: Pod "pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3" satisfied condition "success or failure" Jun 29 14:17:42.145: INFO: Trying to get logs from node iruya-worker pod pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3 container secret-env-test: STEP: delete the pod Jun 29 14:17:42.179: INFO: Waiting for pod pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3 to disappear Jun 29 14:17:42.192: INFO: Pod pod-secrets-18019f3d-54d2-4155-8e5c-8ae82f3d6ae3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:17:42.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7499" for this suite. 
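The pod in the spec above consumes the secret through a secretKeyRef environment variable and echoes it, which the framework then checks in the container log. Roughly equivalent steps by hand (all names and the literal value below are placeholders, not from this run):

    kubectl create secret generic secret-env-demo --from-literal=SECRET_DATA=value
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-pod
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox
        command: ["sh", "-c", "echo $SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-env-demo
              key: SECRET_DATA
    EOF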
Jun 29 14:17:48.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:17:48.325: INFO: namespace secrets-7499 deletion completed in 6.129604827s • [SLOW TEST:12.299 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:17:48.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:17:48.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b" in namespace "downward-api-2829" to be "success or failure" Jun 29 14:17:48.405: INFO: Pod "downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.668874ms Jun 29 14:17:50.410: INFO: Pod "downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047272751s Jun 29 14:17:52.415: INFO: Pod "downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052264117s STEP: Saw pod success Jun 29 14:17:52.415: INFO: Pod "downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b" satisfied condition "success or failure" Jun 29 14:17:52.418: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b container client-container: STEP: delete the pod Jun 29 14:17:52.461: INFO: Waiting for pod downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b to disappear Jun 29 14:17:52.474: INFO: Pod downwardapi-volume-cd81323d-8447-4416-bfc4-28121be8764b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:17:52.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2829" for this suite. 
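Here the limit is surfaced as a file rather than an environment variable: a downwardAPI volume item with a resourceFieldRef writes the container's limits.memory where the test container can cat it. A minimal sketch (names and the 64Mi limit are placeholders, not values from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # The file holds the limit in bytes (67108864 for 64Mi).
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: "64Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF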
Jun 29 14:17:58.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:17:58.584: INFO: namespace downward-api-2829 deletion completed in 6.107650551s • [SLOW TEST:10.258 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:17:58.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:17:58.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21" in namespace "projected-4738" to be "success or failure" Jun 29 14:17:58.717: INFO: Pod "downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21": Phase="Pending", Reason="", readiness=false. Elapsed: 60.140956ms Jun 29 14:18:00.722: INFO: Pod "downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06480659s Jun 29 14:18:02.726: INFO: Pod "downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068960404s STEP: Saw pod success Jun 29 14:18:02.726: INFO: Pod "downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21" satisfied condition "success or failure" Jun 29 14:18:02.729: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21 container client-container: STEP: delete the pod Jun 29 14:18:02.752: INFO: Waiting for pod downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21 to disappear Jun 29 14:18:02.756: INFO: Pod downwardapi-volume-df6058c7-8f03-491f-80da-8fa9c96e4b21 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:18:02.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4738" for this suite. 
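The variant above differs from the previous spec in two ways: the item lives under a projected volume's downwardAPI source, and the container deliberately sets no memory limit, in which case limits.memory resolves to the node's allocatable memory instead. Sketched under the same placeholder naming:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # No resources.limits set, so the reported value falls back to
        # the node's allocatable memory.
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF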
Jun 29 14:18:08.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:18:08.843: INFO: namespace projected-4738 deletion completed in 6.083702671s • [SLOW TEST:10.259 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:18:08.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-gw2h STEP: Creating a pod to test atomic-volume-subpath Jun 29 14:18:08.915: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gw2h" in namespace "subpath-9198" to be "success or failure" Jun 29 14:18:08.934: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Pending", Reason="", readiness=false. Elapsed: 18.815648ms Jun 29 14:18:10.939: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024437462s Jun 29 14:18:12.943: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 4.027928595s Jun 29 14:18:14.947: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 6.032130649s Jun 29 14:18:16.951: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 8.036572595s Jun 29 14:18:18.956: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 10.041395031s Jun 29 14:18:20.960: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 12.045575525s Jun 29 14:18:22.965: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 14.050382798s Jun 29 14:18:24.970: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 16.054774288s Jun 29 14:18:26.976: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 18.061144463s Jun 29 14:18:28.980: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. Elapsed: 20.065551351s Jun 29 14:18:30.985: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.070383064s Jun 29 14:18:32.989: INFO: Pod "pod-subpath-test-configmap-gw2h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074276274s STEP: Saw pod success Jun 29 14:18:32.989: INFO: Pod "pod-subpath-test-configmap-gw2h" satisfied condition "success or failure" Jun 29 14:18:32.991: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-gw2h container test-container-subpath-configmap-gw2h: STEP: delete the pod Jun 29 14:18:33.010: INFO: Waiting for pod pod-subpath-test-configmap-gw2h to disappear Jun 29 14:18:33.013: INFO: Pod pod-subpath-test-configmap-gw2h no longer exists STEP: Deleting pod pod-subpath-test-configmap-gw2h Jun 29 14:18:33.013: INFO: Deleting pod "pod-subpath-test-configmap-gw2h" in namespace "subpath-9198" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:18:33.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9198" for this suite. Jun 29 14:18:39.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:18:39.118: INFO: namespace subpath-9198 deletion completed in 6.099300138s • [SLOW TEST:30.274 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:18:39.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 29 14:18:39.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3847' Jun 29 14:18:39.268: INFO: stderr: "" Jun 29 14:18:39.268: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jun 29 14:18:39.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3847' Jun 29 
14:18:52.169: INFO: stderr: "" Jun 29 14:18:52.169: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:18:52.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3847" for this suite. Jun 29 14:18:58.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:18:58.339: INFO: namespace kubectl-3847 deletion completed in 6.138991947s • [SLOW TEST:19.220 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:18:58.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5997.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 65.135.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.135.65_udp@PTR;check="$$(dig +tcp +noall +answer +search 65.135.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.135.65_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 65.135.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.135.65_udp@PTR;check="$$(dig +tcp +noall +answer +search 65.135.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.135.65_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 29 14:19:04.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.507: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.510: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.513: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.534: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.537: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.540: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.543: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:04.561: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 14:19:09.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods 
dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.573: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.577: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.628: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.631: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.633: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.636: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:09.654: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 14:19:14.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.570: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.573: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.596: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the 
server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.605: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:14.623: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 14:19:19.580: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.586: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.589: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.609: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.614: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod 
dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:19.634: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 14:19:24.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.572: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.575: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.597: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:24.625: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 
14:19:29.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.572: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.594: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.597: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.655: INFO: Unable to read jessie_udp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.658: INFO: Unable to read jessie_tcp@dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.661: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.664: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local from pod dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2: the server could not find the requested resource (get pods dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2) Jun 29 14:19:29.680: INFO: Lookups using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 failed for: [wheezy_udp@dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@dns-test-service.dns-5997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_udp@dns-test-service.dns-5997.svc.cluster.local jessie_tcp@dns-test-service.dns-5997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5997.svc.cluster.local] Jun 29 14:19:34.655: INFO: DNS probes using dns-5997/dns-test-4ee69296-6a69-46e6-b231-835f88cb94b2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:19:34.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5997" for this suite. 
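------------------------------
For reference: the truncated probe command at the top of this test is one entry in a per-name loop that the DNS test writes into its probe pods, of the form check="$$(dig ...)" && test -n "$$check" && echo OK > /results/<name>. A minimal, self-contained Go sketch of assembling such a loop; the lookup names come from the log above, while the A record type and the 600-iteration bound are assumptions about the harness, and the "$$" escaping is reproduced as it appears in the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Build one UDP and one TCP dig probe per expected DNS name; each probe
	// writes an OK marker file only once the lookup returns a non-empty answer.
	names := []string{
		"dns-test-service.dns-5997.svc.cluster.local",
		"_http._tcp.dns-test-service.dns-5997.svc.cluster.local",
	}
	var probes []string
	for _, name := range names {
		probes = append(probes,
			fmt.Sprintf(`check="$$(dig +notcp +noall +answer +search %s A)" && test -n "$$check" && echo OK > /results/udp@%s`, name, name),
			fmt.Sprintf(`check="$$(dig +tcp +noall +answer +search %s A)" && test -n "$$check" && echo OK > /results/tcp@%s`, name, name),
		)
	}
	// The prober repeats the whole batch once a second, like the ";sleep 1; done" tail above.
	script := "for i in `seq 1 600`; do " + strings.Join(probes, ";") + ";sleep 1; done"
	fmt.Println(script)
}
------------------------------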
Jun 29 14:19:40.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:19:40.931: INFO: namespace dns-5997 deletion completed in 6.118158531s • [SLOW TEST:42.592 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:19:40.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 29 14:19:41.019: INFO: Waiting up to 5m0s for pod "pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2" in namespace "emptydir-195" to be "success or failure" Jun 29 14:19:41.023: INFO: Pod "pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.471591ms Jun 29 14:19:43.026: INFO: Pod "pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007301107s Jun 29 14:19:45.030: INFO: Pod "pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011406045s STEP: Saw pod success Jun 29 14:19:45.031: INFO: Pod "pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2" satisfied condition "success or failure" Jun 29 14:19:45.034: INFO: Trying to get logs from node iruya-worker pod pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2 container test-container: STEP: delete the pod Jun 29 14:19:45.060: INFO: Waiting for pod pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2 to disappear Jun 29 14:19:45.069: INFO: Pod pod-dbed1936-f569-4c1d-aec0-cdff4aa297b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:19:45.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-195" for this suite. 
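------------------------------
For reference: the emptyDir pod this test creates has roughly the following shape — a volume with a zero-valued EmptyDirVolumeSource (which selects the default, node-disk medium) mounted into a short-lived container that reports the volume's mode. A hedged Go sketch using v1.15-era k8s.io/api types; the image, command, and object names are placeholders, not the test's actual fixtures:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				// Leaving EmptyDirVolumeSource zero-valued selects the default
				// medium; the test asserts the resulting directory mode (0777).
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Printf("volume: %+v\n", pod.Spec.Volumes[0])
}
------------------------------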
Jun 29 14:19:51.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:19:51.158: INFO: namespace emptydir-195 deletion completed in 6.086090061s • [SLOW TEST:10.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:19:51.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d0ce547b-f07a-49a1-a4c9-cd9fd80fb5d3 STEP: Creating a pod to test consume configMaps Jun 29 14:19:51.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387" in namespace "configmap-3909" to be "success or failure" Jun 29 14:19:51.242: INFO: Pod "pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387": Phase="Pending", Reason="", readiness=false. Elapsed: 3.333999ms Jun 29 14:19:53.246: INFO: Pod "pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006823918s Jun 29 14:19:55.250: INFO: Pod "pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011160759s STEP: Saw pod success Jun 29 14:19:55.250: INFO: Pod "pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387" satisfied condition "success or failure" Jun 29 14:19:55.253: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387 container configmap-volume-test: STEP: delete the pod Jun 29 14:19:55.275: INFO: Waiting for pod pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387 to disappear Jun 29 14:19:55.284: INFO: Pod pod-configmaps-407bfea7-5cac-4f58-8b76-9a15d3a9f387 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:19:55.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3909" for this suite. 
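------------------------------
For reference: defaultMode on a configMap volume source sets the file mode of every key projected into the volume, which is the knob this test exercises. A hedged Go sketch; the object names and the 0400 mode are illustrative (absent a defaultMode, the kubelet defaults to 0644):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// defaultMode applies to every file projected from the configMap.
	mode := int32(0400)
	vol := v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          &mode,
			},
		},
	}
	fmt.Printf("files will appear with mode %o\n", *vol.VolumeSource.ConfigMap.DefaultMode)
}
------------------------------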
Jun 29 14:20:01.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:20:01.384: INFO: namespace configmap-3909 deletion completed in 6.095882837s • [SLOW TEST:10.225 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:20:01.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 29 14:20:01.464: INFO: Waiting up to 5m0s for pod "downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7" in namespace "downward-api-50" to be "success or failure" Jun 29 14:20:01.470: INFO: Pod "downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852427ms Jun 29 14:20:03.475: INFO: Pod "downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011333518s Jun 29 14:20:05.479: INFO: Pod "downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015031676s STEP: Saw pod success Jun 29 14:20:05.479: INFO: Pod "downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7" satisfied condition "success or failure" Jun 29 14:20:05.481: INFO: Trying to get logs from node iruya-worker2 pod downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7 container dapi-container: STEP: delete the pod Jun 29 14:20:05.509: INFO: Waiting for pod downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7 to disappear Jun 29 14:20:05.568: INFO: Pod downward-api-cdf8e77f-2a47-4ba7-a38f-686e10ac15f7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:20:05.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-50" for this suite. 
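------------------------------
For reference: the env vars under test come from downward-API resourceFieldRef selectors against the container's own resources. A hedged Go sketch of such a container; the variable names and resource quantities are placeholders, not the test's exact values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Each env var resolves to one of this container's own limits/requests.
	c := v1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Resources: v1.ResourceRequirements{
			Requests: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("250m"),
				v1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("500m"),
				v1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []v1.EnvVar{
			{Name: "CPU_LIMIT", ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"}}},
			{Name: "MEMORY_LIMIT", ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"}}},
			{Name: "CPU_REQUEST", ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.cpu"}}},
			{Name: "MEMORY_REQUEST", ValueFrom: &v1.EnvVarSource{
				ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.memory"}}},
		},
	}
	fmt.Println(len(c.Env), "downward API env vars configured")
}
------------------------------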
Jun 29 14:20:11.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:20:11.674: INFO: namespace downward-api-50 deletion completed in 6.102458656s • [SLOW TEST:10.290 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:20:11.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:20:11.783: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 29 14:20:11.802: INFO: Number of nodes with available pods: 0 Jun 29 14:20:11.802: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 29 14:20:11.845: INFO: Number of nodes with available pods: 0 Jun 29 14:20:11.846: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:12.898: INFO: Number of nodes with available pods: 0 Jun 29 14:20:12.898: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:13.850: INFO: Number of nodes with available pods: 0 Jun 29 14:20:13.850: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:14.860: INFO: Number of nodes with available pods: 1 Jun 29 14:20:14.860: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 29 14:20:14.891: INFO: Number of nodes with available pods: 1 Jun 29 14:20:14.891: INFO: Number of running nodes: 0, number of available pods: 1 Jun 29 14:20:15.896: INFO: Number of nodes with available pods: 0 Jun 29 14:20:15.896: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 29 14:20:15.908: INFO: Number of nodes with available pods: 0 Jun 29 14:20:15.908: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:16.912: INFO: Number of nodes with available pods: 0 Jun 29 14:20:16.912: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:17.913: INFO: Number of nodes with available pods: 0 Jun 29 14:20:17.913: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:18.911: INFO: Number of nodes with available pods: 0 Jun 29 14:20:18.912: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:19.912: INFO: Number of nodes with available pods: 0 Jun 29 14:20:19.912: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:20.912: INFO: Number of nodes with available pods: 0 Jun 29 14:20:20.912: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:21.912: INFO: Number of nodes with available pods: 0 Jun 29 14:20:21.912: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:22.964: INFO: Number of nodes with available pods: 0 Jun 29 14:20:22.964: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:23.914: INFO: Number of nodes with available pods: 0 Jun 29 14:20:23.914: INFO: Node iruya-worker is running more than one daemon pod Jun 29 14:20:24.912: INFO: Number of nodes with available pods: 1 Jun 29 14:20:24.912: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1908, will wait for the garbage collector to delete the pods Jun 29 14:20:24.976: INFO: Deleting DaemonSet.extensions daemon-set took: 6.298693ms Jun 29 14:20:25.276: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288043ms Jun 29 14:20:32.203: INFO: Number of nodes with available pods: 0 Jun 29 14:20:32.203: INFO: Number of running nodes: 0, number of available pods: 0 Jun 29 14:20:32.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1908/daemonsets","resourceVersion":"19120166"},"items":null} Jun 29 14:20:32.209: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1908/pods","resourceVersion":"19120166"},"items":null} 
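------------------------------
For reference: the daemon above is pinned by a pod-template nodeSelector, which is why relabeling a node from blue to green first schedules and then evicts the daemon pod, and the last step switches the update strategy to RollingUpdate. A hedged Go sketch of such a DaemonSet; the label key/values and image are illustrative, not the test's actual ones:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					// Daemon pods only land on nodes carrying this label, so
					// relabeling a node adds or removes its daemon pod.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []v1.Container{{
						Name:  "app",
						Image: "nginx:1.14-alpine",
					}},
				},
			},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
		},
	}
	fmt.Println(ds.Name, ds.Spec.Template.Spec.NodeSelector)
}
------------------------------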
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:20:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1908" for this suite. Jun 29 14:20:38.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:20:38.347: INFO: namespace daemonsets-1908 deletion completed in 6.096918891s • [SLOW TEST:26.671 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:20:38.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 29 14:20:38.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4740' Jun 29 14:20:38.711: INFO: stderr: "" Jun 29 14:20:38.711: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 29 14:20:39.715: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:39.715: INFO: Found 0 / 1 Jun 29 14:20:40.750: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:40.750: INFO: Found 0 / 1 Jun 29 14:20:41.716: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:41.716: INFO: Found 0 / 1 Jun 29 14:20:42.715: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:42.715: INFO: Found 0 / 1 Jun 29 14:20:43.715: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:43.715: INFO: Found 1 / 1 Jun 29 14:20:43.715: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 29 14:20:43.718: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:43.718: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 29 14:20:43.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-mv6vj --namespace=kubectl-4740 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 29 14:20:43.824: INFO: stderr: "" Jun 29 14:20:43.824: INFO: stdout: "pod/redis-master-mv6vj patched\n" STEP: checking annotations Jun 29 14:20:43.827: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:20:43.827: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:20:43.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4740" for this suite. Jun 29 14:21:05.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:21:05.919: INFO: namespace kubectl-4740 deletion completed in 22.090220989s • [SLOW TEST:27.572 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:21:05.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-0fa56e59-2d7d-464b-a436-65504c2aba86 in namespace container-probe-5421 Jun 29 14:21:09.991: INFO: Started pod liveness-0fa56e59-2d7d-464b-a436-65504c2aba86 in namespace container-probe-5421 STEP: checking the pod's current state and verifying that restartCount is present Jun 29 14:21:09.995: INFO: Initial restart count of pod liveness-0fa56e59-2d7d-464b-a436-65504c2aba86 is 0 Jun 29 14:21:30.044: INFO: Restart count of pod container-probe-5421/liveness-0fa56e59-2d7d-464b-a436-65504c2aba86 is now 1 (20.049163841s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:21:30.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5421" for this suite. 
Jun 29 14:21:36.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:21:36.198: INFO: namespace container-probe-5421 deletion completed in 6.124307389s • [SLOW TEST:30.279 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:21:36.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-7803/secret-test-a1859d3a-86df-4def-a661-612675e4977d STEP: Creating a pod to test consume secrets Jun 29 14:21:36.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc" in namespace "secrets-7803" to be "success or failure" Jun 29 14:21:36.332: INFO: Pod "pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.036873ms Jun 29 14:21:38.337: INFO: Pod "pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027936985s Jun 29 14:21:40.342: INFO: Pod "pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033568064s STEP: Saw pod success Jun 29 14:21:40.342: INFO: Pod "pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc" satisfied condition "success or failure" Jun 29 14:21:40.345: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc container env-test: STEP: delete the pod Jun 29 14:21:40.361: INFO: Waiting for pod pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc to disappear Jun 29 14:21:40.366: INFO: Pod pod-configmaps-4ca5888d-9576-4b1d-8fcd-509bcc48e0cc no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:21:40.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7803" for this suite. 
Jun 29 14:21:46.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:21:46.498: INFO: namespace secrets-7803 deletion completed in 6.128717156s • [SLOW TEST:10.299 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:21:46.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9611 I0629 14:21:46.576490 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9611, replica count: 1 I0629 14:21:47.626948 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:21:48.627135 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:21:49.627337 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:21:50.627555 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 29 14:21:50.759: INFO: Created: latency-svc-5cd56 Jun 29 14:21:50.777: INFO: Got endpoints: latency-svc-5cd56 [50.032704ms] Jun 29 14:21:50.846: INFO: Created: latency-svc-zfr49 Jun 29 14:21:50.852: INFO: Got endpoints: latency-svc-zfr49 [74.457316ms] Jun 29 14:21:50.908: INFO: Created: latency-svc-67wnt Jun 29 14:21:50.989: INFO: Got endpoints: latency-svc-67wnt [211.89381ms] Jun 29 14:21:51.010: INFO: Created: latency-svc-m7f84 Jun 29 14:21:51.020: INFO: Got endpoints: latency-svc-m7f84 [242.425885ms] Jun 29 14:21:51.040: INFO: Created: latency-svc-5pv6t Jun 29 14:21:51.057: INFO: Got endpoints: latency-svc-5pv6t [279.121055ms] Jun 29 14:21:51.077: INFO: Created: latency-svc-shs42 Jun 29 14:21:51.127: INFO: Got endpoints: latency-svc-shs42 [349.040139ms] Jun 29 14:21:51.155: INFO: Created: latency-svc-drl9l Jun 29 14:21:51.169: INFO: Got endpoints: latency-svc-drl9l [391.198418ms] Jun 29 14:21:51.203: INFO: Created: latency-svc-gp7qd Jun 29 14:21:51.226: INFO: Got endpoints: latency-svc-gp7qd [447.921189ms] Jun 29 14:21:51.283: INFO: Created: latency-svc-c9mgf Jun 29 14:21:51.303: INFO: Got endpoints: latency-svc-c9mgf [525.60218ms] Jun 29 14:21:51.335: INFO: Created: latency-svc-6gh66 Jun 29 14:21:51.349: INFO: Got endpoints: latency-svc-6gh66 [571.649321ms] Jun 29 14:21:51.371: INFO: Created: 
latency-svc-k7qcr Jun 29 14:21:51.426: INFO: Got endpoints: latency-svc-k7qcr [648.496694ms] Jun 29 14:21:51.460: INFO: Created: latency-svc-ntxz8 Jun 29 14:21:51.514: INFO: Got endpoints: latency-svc-ntxz8 [735.908402ms] Jun 29 14:21:51.582: INFO: Created: latency-svc-bl5jx Jun 29 14:21:51.596: INFO: Got endpoints: latency-svc-bl5jx [818.026966ms] Jun 29 14:21:51.646: INFO: Created: latency-svc-n9sdf Jun 29 14:21:51.662: INFO: Got endpoints: latency-svc-n9sdf [884.62243ms] Jun 29 14:21:51.682: INFO: Created: latency-svc-hwcd4 Jun 29 14:21:51.755: INFO: Created: latency-svc-l86kh Jun 29 14:21:51.755: INFO: Got endpoints: latency-svc-hwcd4 [977.206481ms] Jun 29 14:21:51.766: INFO: Got endpoints: latency-svc-l86kh [988.613115ms] Jun 29 14:21:51.791: INFO: Created: latency-svc-75nb7 Jun 29 14:21:51.809: INFO: Got endpoints: latency-svc-75nb7 [956.973035ms] Jun 29 14:21:51.831: INFO: Created: latency-svc-pbjq7 Jun 29 14:21:51.899: INFO: Got endpoints: latency-svc-pbjq7 [909.631824ms] Jun 29 14:21:51.922: INFO: Created: latency-svc-ttlfx Jun 29 14:21:51.935: INFO: Got endpoints: latency-svc-ttlfx [915.107251ms] Jun 29 14:21:51.965: INFO: Created: latency-svc-9pbnb Jun 29 14:21:51.978: INFO: Got endpoints: latency-svc-9pbnb [921.26355ms] Jun 29 14:21:51.996: INFO: Created: latency-svc-5x4hp Jun 29 14:21:52.043: INFO: Got endpoints: latency-svc-5x4hp [915.74026ms] Jun 29 14:21:52.049: INFO: Created: latency-svc-rh8gt Jun 29 14:21:52.056: INFO: Got endpoints: latency-svc-rh8gt [887.30594ms] Jun 29 14:21:52.090: INFO: Created: latency-svc-z2p6f Jun 29 14:21:52.093: INFO: Got endpoints: latency-svc-z2p6f [867.514831ms] Jun 29 14:21:52.163: INFO: Created: latency-svc-vzcb6 Jun 29 14:21:52.177: INFO: Got endpoints: latency-svc-vzcb6 [873.954957ms] Jun 29 14:21:52.199: INFO: Created: latency-svc-v4w4x Jun 29 14:21:52.207: INFO: Got endpoints: latency-svc-v4w4x [857.614599ms] Jun 29 14:21:52.229: INFO: Created: latency-svc-hnr5d Jun 29 14:21:52.238: INFO: Got endpoints: latency-svc-hnr5d [811.211163ms] Jun 29 14:21:52.301: INFO: Created: latency-svc-lhj6n Jun 29 14:21:52.305: INFO: Got endpoints: latency-svc-lhj6n [791.659301ms] Jun 29 14:21:52.354: INFO: Created: latency-svc-ssdfh Jun 29 14:21:52.371: INFO: Got endpoints: latency-svc-ssdfh [775.394857ms] Jun 29 14:21:52.444: INFO: Created: latency-svc-vtvbf Jun 29 14:21:52.448: INFO: Got endpoints: latency-svc-vtvbf [785.576911ms] Jun 29 14:21:52.481: INFO: Created: latency-svc-vhn6v Jun 29 14:21:52.497: INFO: Got endpoints: latency-svc-vhn6v [741.75248ms] Jun 29 14:21:52.516: INFO: Created: latency-svc-w25kt Jun 29 14:21:52.527: INFO: Got endpoints: latency-svc-w25kt [760.360369ms] Jun 29 14:21:52.582: INFO: Created: latency-svc-7wrc4 Jun 29 14:21:52.587: INFO: Got endpoints: latency-svc-7wrc4 [777.922356ms] Jun 29 14:21:52.642: INFO: Created: latency-svc-g5xvn Jun 29 14:21:52.654: INFO: Got endpoints: latency-svc-g5xvn [754.326164ms] Jun 29 14:21:52.726: INFO: Created: latency-svc-gccjf Jun 29 14:21:52.729: INFO: Got endpoints: latency-svc-gccjf [793.817666ms] Jun 29 14:21:52.781: INFO: Created: latency-svc-8x72r Jun 29 14:21:52.809: INFO: Got endpoints: latency-svc-8x72r [831.129021ms] Jun 29 14:21:52.870: INFO: Created: latency-svc-wwdzw Jun 29 14:21:52.874: INFO: Got endpoints: latency-svc-wwdzw [830.819111ms] Jun 29 14:21:52.925: INFO: Created: latency-svc-gjssg Jun 29 14:21:52.937: INFO: Got endpoints: latency-svc-gjssg [880.23221ms] Jun 29 14:21:52.960: INFO: Created: latency-svc-6vtxr Jun 29 14:21:53.007: INFO: Got endpoints: latency-svc-6vtxr 
[913.367127ms] Jun 29 14:21:53.037: INFO: Created: latency-svc-r6g2f Jun 29 14:21:53.051: INFO: Got endpoints: latency-svc-r6g2f [873.803252ms] Jun 29 14:21:53.098: INFO: Created: latency-svc-qpbfm Jun 29 14:21:53.106: INFO: Got endpoints: latency-svc-qpbfm [898.544839ms] Jun 29 14:21:53.145: INFO: Created: latency-svc-kxzfq Jun 29 14:21:53.154: INFO: Got endpoints: latency-svc-kxzfq [916.437152ms] Jun 29 14:21:53.177: INFO: Created: latency-svc-f2nrh Jun 29 14:21:53.190: INFO: Got endpoints: latency-svc-f2nrh [884.480375ms] Jun 29 14:21:53.220: INFO: Created: latency-svc-4hzxw Jun 29 14:21:53.327: INFO: Got endpoints: latency-svc-4hzxw [955.479082ms] Jun 29 14:21:53.363: INFO: Created: latency-svc-47j74 Jun 29 14:21:53.438: INFO: Got endpoints: latency-svc-47j74 [990.039982ms] Jun 29 14:21:53.469: INFO: Created: latency-svc-5ws5j Jun 29 14:21:53.487: INFO: Got endpoints: latency-svc-5ws5j [990.126152ms] Jun 29 14:21:53.531: INFO: Created: latency-svc-p82zt Jun 29 14:21:53.535: INFO: Got endpoints: latency-svc-p82zt [1.008502424s] Jun 29 14:21:53.591: INFO: Created: latency-svc-wrbrd Jun 29 14:21:53.607: INFO: Got endpoints: latency-svc-wrbrd [1.019936777s] Jun 29 14:21:53.656: INFO: Created: latency-svc-z67ht Jun 29 14:21:53.674: INFO: Got endpoints: latency-svc-z67ht [1.020498496s] Jun 29 14:21:53.732: INFO: Created: latency-svc-nnr6x Jun 29 14:21:53.758: INFO: Got endpoints: latency-svc-nnr6x [1.028872745s] Jun 29 14:21:53.759: INFO: Created: latency-svc-g982t Jun 29 14:21:53.776: INFO: Got endpoints: latency-svc-g982t [966.904775ms] Jun 29 14:21:53.795: INFO: Created: latency-svc-z9pp8 Jun 29 14:21:53.830: INFO: Got endpoints: latency-svc-z9pp8 [956.493428ms] Jun 29 14:21:53.906: INFO: Created: latency-svc-9jwmj Jun 29 14:21:53.915: INFO: Got endpoints: latency-svc-9jwmj [977.972104ms] Jun 29 14:21:53.944: INFO: Created: latency-svc-s6qhq Jun 29 14:21:53.957: INFO: Got endpoints: latency-svc-s6qhq [950.811755ms] Jun 29 14:21:53.981: INFO: Created: latency-svc-5dttx Jun 29 14:21:53.994: INFO: Got endpoints: latency-svc-5dttx [942.536631ms] Jun 29 14:21:54.062: INFO: Created: latency-svc-xsrg7 Jun 29 14:21:54.093: INFO: Got endpoints: latency-svc-xsrg7 [987.177103ms] Jun 29 14:21:54.094: INFO: Created: latency-svc-nrvhb Jun 29 14:21:54.108: INFO: Got endpoints: latency-svc-nrvhb [954.080538ms] Jun 29 14:21:54.147: INFO: Created: latency-svc-5phnd Jun 29 14:21:54.156: INFO: Got endpoints: latency-svc-5phnd [966.096328ms] Jun 29 14:21:54.211: INFO: Created: latency-svc-nlws2 Jun 29 14:21:54.216: INFO: Got endpoints: latency-svc-nlws2 [889.443426ms] Jun 29 14:21:54.238: INFO: Created: latency-svc-dc6r9 Jun 29 14:21:54.265: INFO: Got endpoints: latency-svc-dc6r9 [826.92267ms] Jun 29 14:21:54.294: INFO: Created: latency-svc-ghnjl Jun 29 14:21:54.307: INFO: Got endpoints: latency-svc-ghnjl [820.342113ms] Jun 29 14:21:54.361: INFO: Created: latency-svc-dbghs Jun 29 14:21:54.367: INFO: Got endpoints: latency-svc-dbghs [831.939421ms] Jun 29 14:21:54.387: INFO: Created: latency-svc-qdtl9 Jun 29 14:21:54.405: INFO: Got endpoints: latency-svc-qdtl9 [797.68171ms] Jun 29 14:21:54.431: INFO: Created: latency-svc-7lplq Jun 29 14:21:54.460: INFO: Got endpoints: latency-svc-7lplq [786.013844ms] Jun 29 14:21:54.522: INFO: Created: latency-svc-69fbk Jun 29 14:21:54.532: INFO: Got endpoints: latency-svc-69fbk [773.650711ms] Jun 29 14:21:54.549: INFO: Created: latency-svc-rxclf Jun 29 14:21:54.561: INFO: Got endpoints: latency-svc-rxclf [784.631643ms] Jun 29 14:21:54.597: INFO: Created: latency-svc-bssk7 Jun 
29 14:21:54.610: INFO: Got endpoints: latency-svc-bssk7 [779.387412ms] Jun 29 14:21:54.654: INFO: Created: latency-svc-6n9v7 Jun 29 14:21:54.657: INFO: Got endpoints: latency-svc-6n9v7 [742.060755ms] Jun 29 14:21:54.683: INFO: Created: latency-svc-qm2xk Jun 29 14:21:54.700: INFO: Got endpoints: latency-svc-qm2xk [742.569749ms] Jun 29 14:21:54.719: INFO: Created: latency-svc-twc6s Jun 29 14:21:54.730: INFO: Got endpoints: latency-svc-twc6s [736.089734ms] Jun 29 14:21:54.747: INFO: Created: latency-svc-r6wz2 Jun 29 14:21:54.792: INFO: Got endpoints: latency-svc-r6wz2 [698.559514ms] Jun 29 14:21:54.794: INFO: Created: latency-svc-bpr4c Jun 29 14:21:54.809: INFO: Got endpoints: latency-svc-bpr4c [700.686061ms] Jun 29 14:21:54.831: INFO: Created: latency-svc-2g94r Jun 29 14:21:54.850: INFO: Got endpoints: latency-svc-2g94r [693.85869ms] Jun 29 14:21:54.880: INFO: Created: latency-svc-69x4s Jun 29 14:21:54.953: INFO: Got endpoints: latency-svc-69x4s [737.114709ms] Jun 29 14:21:54.956: INFO: Created: latency-svc-l9rbv Jun 29 14:21:54.966: INFO: Got endpoints: latency-svc-l9rbv [700.762578ms] Jun 29 14:21:54.999: INFO: Created: latency-svc-p6sr5 Jun 29 14:21:55.015: INFO: Got endpoints: latency-svc-p6sr5 [707.232651ms] Jun 29 14:21:55.047: INFO: Created: latency-svc-jfn2h Jun 29 14:21:55.091: INFO: Got endpoints: latency-svc-jfn2h [723.25199ms] Jun 29 14:21:55.114: INFO: Created: latency-svc-x5tm8 Jun 29 14:21:55.129: INFO: Got endpoints: latency-svc-x5tm8 [723.654845ms] Jun 29 14:21:55.151: INFO: Created: latency-svc-dphpn Jun 29 14:21:55.165: INFO: Got endpoints: latency-svc-dphpn [705.270903ms] Jun 29 14:21:55.187: INFO: Created: latency-svc-4tkld Jun 29 14:21:55.228: INFO: Got endpoints: latency-svc-4tkld [696.582647ms] Jun 29 14:21:55.239: INFO: Created: latency-svc-k5kq5 Jun 29 14:21:55.256: INFO: Got endpoints: latency-svc-k5kq5 [694.88472ms] Jun 29 14:21:55.293: INFO: Created: latency-svc-96txc Jun 29 14:21:55.322: INFO: Got endpoints: latency-svc-96txc [712.874324ms] Jun 29 14:21:55.372: INFO: Created: latency-svc-2p55j Jun 29 14:21:55.388: INFO: Got endpoints: latency-svc-2p55j [731.331209ms] Jun 29 14:21:55.408: INFO: Created: latency-svc-gxdqs Jun 29 14:21:55.438: INFO: Got endpoints: latency-svc-gxdqs [738.041046ms] Jun 29 14:21:55.504: INFO: Created: latency-svc-922fm Jun 29 14:21:55.508: INFO: Got endpoints: latency-svc-922fm [777.581247ms] Jun 29 14:21:55.533: INFO: Created: latency-svc-7cbx9 Jun 29 14:21:55.545: INFO: Got endpoints: latency-svc-7cbx9 [753.642859ms] Jun 29 14:21:55.582: INFO: Created: latency-svc-wnzk2 Jun 29 14:21:55.593: INFO: Got endpoints: latency-svc-wnzk2 [784.502999ms] Jun 29 14:21:55.678: INFO: Created: latency-svc-5b6b2 Jun 29 14:21:55.691: INFO: Got endpoints: latency-svc-5b6b2 [840.479199ms] Jun 29 14:21:55.725: INFO: Created: latency-svc-v6p55 Jun 29 14:21:55.738: INFO: Got endpoints: latency-svc-v6p55 [784.551307ms] Jun 29 14:21:55.761: INFO: Created: latency-svc-gl2zt Jun 29 14:21:55.810: INFO: Got endpoints: latency-svc-gl2zt [843.519959ms] Jun 29 14:21:55.827: INFO: Created: latency-svc-rlqmp Jun 29 14:21:55.840: INFO: Got endpoints: latency-svc-rlqmp [825.610719ms] Jun 29 14:21:55.864: INFO: Created: latency-svc-5vjdd Jun 29 14:21:55.888: INFO: Got endpoints: latency-svc-5vjdd [797.094232ms] Jun 29 14:21:55.954: INFO: Created: latency-svc-tjwcl Jun 29 14:21:55.989: INFO: Got endpoints: latency-svc-tjwcl [860.470516ms] Jun 29 14:21:55.989: INFO: Created: latency-svc-xbjlx Jun 29 14:21:56.025: INFO: Got endpoints: latency-svc-xbjlx [859.634213ms] Jun 
29 14:21:56.097: INFO: Created: latency-svc-fkn6v Jun 29 14:21:56.100: INFO: Got endpoints: latency-svc-fkn6v [871.89414ms] Jun 29 14:21:56.128: INFO: Created: latency-svc-876jn Jun 29 14:21:56.142: INFO: Got endpoints: latency-svc-876jn [886.457028ms] Jun 29 14:21:56.176: INFO: Created: latency-svc-chphp Jun 29 14:21:56.190: INFO: Got endpoints: latency-svc-chphp [867.894981ms] Jun 29 14:21:56.241: INFO: Created: latency-svc-8tjtc Jun 29 14:21:56.243: INFO: Got endpoints: latency-svc-8tjtc [854.642418ms] Jun 29 14:21:56.302: INFO: Created: latency-svc-4f4tt Jun 29 14:21:56.323: INFO: Got endpoints: latency-svc-4f4tt [884.835455ms] Jun 29 14:21:56.402: INFO: Created: latency-svc-zn7zh Jun 29 14:21:56.407: INFO: Got endpoints: latency-svc-zn7zh [899.178877ms] Jun 29 14:21:56.439: INFO: Created: latency-svc-8bgng Jun 29 14:21:56.456: INFO: Got endpoints: latency-svc-8bgng [910.501177ms] Jun 29 14:21:56.474: INFO: Created: latency-svc-p7jpb Jun 29 14:21:56.486: INFO: Got endpoints: latency-svc-p7jpb [892.390781ms] Jun 29 14:21:56.552: INFO: Created: latency-svc-f6xrs Jun 29 14:21:56.571: INFO: Got endpoints: latency-svc-f6xrs [880.729862ms] Jun 29 14:21:56.602: INFO: Created: latency-svc-jws4s Jun 29 14:21:56.618: INFO: Got endpoints: latency-svc-jws4s [880.091135ms] Jun 29 14:21:56.639: INFO: Created: latency-svc-6ww8c Jun 29 14:21:56.690: INFO: Got endpoints: latency-svc-6ww8c [880.126671ms] Jun 29 14:21:56.702: INFO: Created: latency-svc-mdpxx Jun 29 14:21:56.715: INFO: Got endpoints: latency-svc-mdpxx [874.440296ms] Jun 29 14:21:56.739: INFO: Created: latency-svc-v9z6c Jun 29 14:21:56.751: INFO: Got endpoints: latency-svc-v9z6c [863.305395ms] Jun 29 14:21:56.774: INFO: Created: latency-svc-p2glf Jun 29 14:21:56.788: INFO: Got endpoints: latency-svc-p2glf [798.433494ms] Jun 29 14:21:56.840: INFO: Created: latency-svc-nw6pw Jun 29 14:21:56.848: INFO: Got endpoints: latency-svc-nw6pw [822.609786ms] Jun 29 14:21:56.872: INFO: Created: latency-svc-w7tpc Jun 29 14:21:56.884: INFO: Got endpoints: latency-svc-w7tpc [783.898439ms] Jun 29 14:21:56.902: INFO: Created: latency-svc-sk452 Jun 29 14:21:56.930: INFO: Got endpoints: latency-svc-sk452 [787.785091ms] Jun 29 14:21:57.002: INFO: Created: latency-svc-z97vc Jun 29 14:21:57.004: INFO: Got endpoints: latency-svc-z97vc [813.975939ms] Jun 29 14:21:57.046: INFO: Created: latency-svc-8z55s Jun 29 14:21:57.060: INFO: Got endpoints: latency-svc-8z55s [817.309526ms] Jun 29 14:21:57.081: INFO: Created: latency-svc-jh6wr Jun 29 14:21:57.145: INFO: Got endpoints: latency-svc-jh6wr [822.165881ms] Jun 29 14:21:57.159: INFO: Created: latency-svc-8bbhp Jun 29 14:21:57.174: INFO: Got endpoints: latency-svc-8bbhp [766.484738ms] Jun 29 14:21:57.206: INFO: Created: latency-svc-w472x Jun 29 14:21:57.223: INFO: Got endpoints: latency-svc-w472x [766.677037ms] Jun 29 14:21:57.272: INFO: Created: latency-svc-r5wqq Jun 29 14:21:57.296: INFO: Got endpoints: latency-svc-r5wqq [809.947065ms] Jun 29 14:21:57.316: INFO: Created: latency-svc-zzhgb Jun 29 14:21:57.331: INFO: Got endpoints: latency-svc-zzhgb [759.168855ms] Jun 29 14:21:57.358: INFO: Created: latency-svc-4fvrx Jun 29 14:21:57.408: INFO: Got endpoints: latency-svc-4fvrx [790.095975ms] Jun 29 14:21:57.424: INFO: Created: latency-svc-r6t9v Jun 29 14:21:57.445: INFO: Got endpoints: latency-svc-r6t9v [755.141642ms] Jun 29 14:21:57.464: INFO: Created: latency-svc-znxrm Jun 29 14:21:57.488: INFO: Got endpoints: latency-svc-znxrm [773.15878ms] Jun 29 14:21:57.542: INFO: Created: latency-svc-nd2r7 Jun 29 14:21:57.572: 
INFO: Got endpoints: latency-svc-nd2r7 [820.805384ms] Jun 29 14:21:57.622: INFO: Created: latency-svc-45mhb Jun 29 14:21:57.695: INFO: Got endpoints: latency-svc-45mhb [907.793004ms] Jun 29 14:21:57.711: INFO: Created: latency-svc-cxrvl Jun 29 14:21:57.716: INFO: Got endpoints: latency-svc-cxrvl [868.117833ms] Jun 29 14:21:57.747: INFO: Created: latency-svc-7nn6l Jun 29 14:21:57.759: INFO: Got endpoints: latency-svc-7nn6l [874.147464ms] Jun 29 14:21:57.859: INFO: Created: latency-svc-4gtqm Jun 29 14:21:57.860: INFO: Got endpoints: latency-svc-4gtqm [930.037828ms] Jun 29 14:21:57.892: INFO: Created: latency-svc-7ssrq Jun 29 14:21:57.903: INFO: Got endpoints: latency-svc-7ssrq [898.750519ms] Jun 29 14:21:57.920: INFO: Created: latency-svc-mkg9n Jun 29 14:21:57.944: INFO: Got endpoints: latency-svc-mkg9n [883.443072ms] Jun 29 14:21:58.002: INFO: Created: latency-svc-xdmc2 Jun 29 14:21:58.018: INFO: Got endpoints: latency-svc-xdmc2 [872.665577ms] Jun 29 14:21:58.066: INFO: Created: latency-svc-v5t5z Jun 29 14:21:58.078: INFO: Got endpoints: latency-svc-v5t5z [904.627521ms] Jun 29 14:21:58.163: INFO: Created: latency-svc-v89jk Jun 29 14:21:58.166: INFO: Got endpoints: latency-svc-v89jk [943.679835ms] Jun 29 14:21:58.196: INFO: Created: latency-svc-zlx5g Jun 29 14:21:58.211: INFO: Got endpoints: latency-svc-zlx5g [914.906881ms] Jun 29 14:21:58.232: INFO: Created: latency-svc-txtdz Jun 29 14:21:58.241: INFO: Got endpoints: latency-svc-txtdz [910.249353ms] Jun 29 14:21:58.301: INFO: Created: latency-svc-5t56w Jun 29 14:21:58.303: INFO: Got endpoints: latency-svc-5t56w [894.390488ms] Jun 29 14:21:58.335: INFO: Created: latency-svc-bchrw Jun 29 14:21:58.350: INFO: Got endpoints: latency-svc-bchrw [904.735172ms] Jun 29 14:21:58.370: INFO: Created: latency-svc-qmbdl Jun 29 14:21:58.386: INFO: Got endpoints: latency-svc-qmbdl [897.963583ms] Jun 29 14:21:58.444: INFO: Created: latency-svc-25sg8 Jun 29 14:21:58.447: INFO: Got endpoints: latency-svc-25sg8 [874.705179ms] Jun 29 14:21:58.481: INFO: Created: latency-svc-4mlpp Jun 29 14:21:58.488: INFO: Got endpoints: latency-svc-4mlpp [792.9539ms] Jun 29 14:21:58.509: INFO: Created: latency-svc-rddjq Jun 29 14:21:58.606: INFO: Got endpoints: latency-svc-rddjq [889.94693ms] Jun 29 14:21:58.616: INFO: Created: latency-svc-vsrst Jun 29 14:21:58.646: INFO: Got endpoints: latency-svc-vsrst [887.314405ms] Jun 29 14:21:58.688: INFO: Created: latency-svc-tw7j5 Jun 29 14:21:58.706: INFO: Got endpoints: latency-svc-tw7j5 [845.35373ms] Jun 29 14:21:58.750: INFO: Created: latency-svc-bjp9s Jun 29 14:21:58.760: INFO: Got endpoints: latency-svc-bjp9s [856.261187ms] Jun 29 14:21:58.780: INFO: Created: latency-svc-9rctt Jun 29 14:21:58.798: INFO: Got endpoints: latency-svc-9rctt [854.337177ms] Jun 29 14:21:58.821: INFO: Created: latency-svc-wxmmj Jun 29 14:21:58.838: INFO: Got endpoints: latency-svc-wxmmj [820.315591ms] Jun 29 14:21:58.888: INFO: Created: latency-svc-8tc66 Jun 29 14:21:58.892: INFO: Got endpoints: latency-svc-8tc66 [814.055588ms] Jun 29 14:21:58.922: INFO: Created: latency-svc-6wpcn Jun 29 14:21:58.935: INFO: Got endpoints: latency-svc-6wpcn [768.343029ms] Jun 29 14:21:58.960: INFO: Created: latency-svc-h2shg Jun 29 14:21:58.971: INFO: Got endpoints: latency-svc-h2shg [760.096553ms] Jun 29 14:21:59.019: INFO: Created: latency-svc-nct9v Jun 29 14:21:59.049: INFO: Got endpoints: latency-svc-nct9v [808.023171ms] Jun 29 14:21:59.084: INFO: Created: latency-svc-qq8pq Jun 29 14:21:59.163: INFO: Got endpoints: latency-svc-qq8pq [859.785239ms] Jun 29 14:21:59.168: 
INFO: Created: latency-svc-kbtnq Jun 29 14:21:59.182: INFO: Got endpoints: latency-svc-kbtnq [832.45629ms] Jun 29 14:21:59.205: INFO: Created: latency-svc-wccbn Jun 29 14:21:59.218: INFO: Got endpoints: latency-svc-wccbn [832.136003ms] Jun 29 14:21:59.254: INFO: Created: latency-svc-77sxh Jun 29 14:21:59.291: INFO: Got endpoints: latency-svc-77sxh [843.702959ms] Jun 29 14:21:59.348: INFO: Created: latency-svc-hmf8z Jun 29 14:21:59.363: INFO: Got endpoints: latency-svc-hmf8z [874.487668ms] Jun 29 14:21:59.432: INFO: Created: latency-svc-4sq4k Jun 29 14:21:59.436: INFO: Got endpoints: latency-svc-4sq4k [829.772216ms] Jun 29 14:21:59.468: INFO: Created: latency-svc-z9ztv Jun 29 14:21:59.483: INFO: Got endpoints: latency-svc-z9ztv [837.492586ms] Jun 29 14:21:59.505: INFO: Created: latency-svc-qvbz6 Jun 29 14:21:59.520: INFO: Got endpoints: latency-svc-qvbz6 [814.377395ms] Jun 29 14:21:59.582: INFO: Created: latency-svc-56n97 Jun 29 14:21:59.592: INFO: Got endpoints: latency-svc-56n97 [832.866015ms] Jun 29 14:21:59.642: INFO: Created: latency-svc-pltsd Jun 29 14:21:59.708: INFO: Got endpoints: latency-svc-pltsd [909.269123ms] Jun 29 14:21:59.738: INFO: Created: latency-svc-ljw9q Jun 29 14:21:59.755: INFO: Got endpoints: latency-svc-ljw9q [916.312809ms] Jun 29 14:21:59.775: INFO: Created: latency-svc-lj7n9 Jun 29 14:21:59.791: INFO: Got endpoints: latency-svc-lj7n9 [898.579081ms] Jun 29 14:21:59.864: INFO: Created: latency-svc-nhjx8 Jun 29 14:21:59.869: INFO: Got endpoints: latency-svc-nhjx8 [934.551791ms] Jun 29 14:21:59.887: INFO: Created: latency-svc-46dzf Jun 29 14:21:59.900: INFO: Got endpoints: latency-svc-46dzf [928.757881ms] Jun 29 14:21:59.936: INFO: Created: latency-svc-5n4ww Jun 29 14:21:59.948: INFO: Got endpoints: latency-svc-5n4ww [898.851046ms] Jun 29 14:22:00.001: INFO: Created: latency-svc-wbvmg Jun 29 14:22:00.008: INFO: Got endpoints: latency-svc-wbvmg [845.630283ms] Jun 29 14:22:00.027: INFO: Created: latency-svc-rdtc9 Jun 29 14:22:00.032: INFO: Got endpoints: latency-svc-rdtc9 [849.607092ms] Jun 29 14:22:00.067: INFO: Created: latency-svc-cll5h Jun 29 14:22:00.099: INFO: Got endpoints: latency-svc-cll5h [880.30907ms] Jun 29 14:22:00.175: INFO: Created: latency-svc-ndkgr Jun 29 14:22:00.207: INFO: Got endpoints: latency-svc-ndkgr [916.171374ms] Jun 29 14:22:00.230: INFO: Created: latency-svc-46tdg Jun 29 14:22:00.253: INFO: Got endpoints: latency-svc-46tdg [890.366665ms] Jun 29 14:22:00.355: INFO: Created: latency-svc-npblh Jun 29 14:22:00.357: INFO: Got endpoints: latency-svc-npblh [921.490369ms] Jun 29 14:22:00.386: INFO: Created: latency-svc-ph52h Jun 29 14:22:00.394: INFO: Got endpoints: latency-svc-ph52h [910.197355ms] Jun 29 14:22:00.416: INFO: Created: latency-svc-mdhkd Jun 29 14:22:00.434: INFO: Got endpoints: latency-svc-mdhkd [914.088262ms] Jun 29 14:22:00.505: INFO: Created: latency-svc-tpbf4 Jun 29 14:22:00.507: INFO: Got endpoints: latency-svc-tpbf4 [914.918018ms] Jun 29 14:22:00.559: INFO: Created: latency-svc-r9vpf Jun 29 14:22:00.577: INFO: Got endpoints: latency-svc-r9vpf [869.59628ms] Jun 29 14:22:00.642: INFO: Created: latency-svc-bcrl2 Jun 29 14:22:00.645: INFO: Got endpoints: latency-svc-bcrl2 [890.680449ms] Jun 29 14:22:00.675: INFO: Created: latency-svc-hctfg Jun 29 14:22:00.689: INFO: Got endpoints: latency-svc-hctfg [898.188574ms] Jun 29 14:22:00.722: INFO: Created: latency-svc-2hbg5 Jun 29 14:22:00.740: INFO: Got endpoints: latency-svc-2hbg5 [870.430111ms] Jun 29 14:22:00.786: INFO: Created: latency-svc-dgwqt Jun 29 14:22:00.792: INFO: Got endpoints: 
latency-svc-dgwqt [891.890654ms] Jun 29 14:22:00.811: INFO: Created: latency-svc-zzf6t Jun 29 14:22:00.822: INFO: Got endpoints: latency-svc-zzf6t [873.918106ms] Jun 29 14:22:00.841: INFO: Created: latency-svc-q2ljn Jun 29 14:22:00.853: INFO: Got endpoints: latency-svc-q2ljn [844.442376ms] Jun 29 14:22:00.871: INFO: Created: latency-svc-lv77s Jun 29 14:22:00.883: INFO: Got endpoints: latency-svc-lv77s [850.865294ms] Jun 29 14:22:00.935: INFO: Created: latency-svc-tfpkg Jun 29 14:22:00.943: INFO: Got endpoints: latency-svc-tfpkg [844.253246ms] Jun 29 14:22:00.963: INFO: Created: latency-svc-9skf5 Jun 29 14:22:00.991: INFO: Got endpoints: latency-svc-9skf5 [784.055324ms] Jun 29 14:22:01.027: INFO: Created: latency-svc-j4rzl Jun 29 14:22:01.073: INFO: Got endpoints: latency-svc-j4rzl [819.407101ms] Jun 29 14:22:01.094: INFO: Created: latency-svc-bfgdw Jun 29 14:22:01.106: INFO: Got endpoints: latency-svc-bfgdw [748.707831ms] Jun 29 14:22:01.125: INFO: Created: latency-svc-h646z Jun 29 14:22:01.137: INFO: Got endpoints: latency-svc-h646z [742.731914ms] Jun 29 14:22:01.166: INFO: Created: latency-svc-gv87j Jun 29 14:22:01.204: INFO: Got endpoints: latency-svc-gv87j [770.14483ms] Jun 29 14:22:01.219: INFO: Created: latency-svc-b6q7h Jun 29 14:22:01.233: INFO: Got endpoints: latency-svc-b6q7h [725.682729ms] Jun 29 14:22:01.262: INFO: Created: latency-svc-c4lrc Jun 29 14:22:01.282: INFO: Got endpoints: latency-svc-c4lrc [704.737282ms] Jun 29 14:22:01.337: INFO: Created: latency-svc-wzfg9 Jun 29 14:22:01.354: INFO: Got endpoints: latency-svc-wzfg9 [708.078368ms] Jun 29 14:22:01.388: INFO: Created: latency-svc-7sndm Jun 29 14:22:01.414: INFO: Got endpoints: latency-svc-7sndm [724.921757ms] Jun 29 14:22:01.436: INFO: Created: latency-svc-6qwkb Jun 29 14:22:01.474: INFO: Got endpoints: latency-svc-6qwkb [734.121988ms] Jun 29 14:22:01.489: INFO: Created: latency-svc-wxgdv Jun 29 14:22:01.511: INFO: Got endpoints: latency-svc-wxgdv [718.85496ms] Jun 29 14:22:01.531: INFO: Created: latency-svc-4tjx6 Jun 29 14:22:01.561: INFO: Got endpoints: latency-svc-4tjx6 [739.141519ms] Jun 29 14:22:01.630: INFO: Created: latency-svc-bwx4f Jun 29 14:22:01.639: INFO: Got endpoints: latency-svc-bwx4f [785.868717ms] Jun 29 14:22:01.665: INFO: Created: latency-svc-pwhtf Jun 29 14:22:01.681: INFO: Got endpoints: latency-svc-pwhtf [798.27078ms] Jun 29 14:22:01.780: INFO: Created: latency-svc-l8czf Jun 29 14:22:01.783: INFO: Got endpoints: latency-svc-l8czf [839.81595ms] Jun 29 14:22:01.842: INFO: Created: latency-svc-nmwcr Jun 29 14:22:01.855: INFO: Got endpoints: latency-svc-nmwcr [863.808235ms] Jun 29 14:22:01.874: INFO: Created: latency-svc-f5788 Jun 29 14:22:01.911: INFO: Got endpoints: latency-svc-f5788 [838.138487ms] Jun 29 14:22:01.928: INFO: Created: latency-svc-lxcpz Jun 29 14:22:01.958: INFO: Got endpoints: latency-svc-lxcpz [851.490655ms] Jun 29 14:22:01.975: INFO: Created: latency-svc-qt9s8 Jun 29 14:22:01.988: INFO: Got endpoints: latency-svc-qt9s8 [851.418749ms] Jun 29 14:22:02.006: INFO: Created: latency-svc-vkm8v Jun 29 14:22:02.067: INFO: Got endpoints: latency-svc-vkm8v [862.331605ms] Jun 29 14:22:02.072: INFO: Created: latency-svc-zsbwq Jun 29 14:22:02.096: INFO: Got endpoints: latency-svc-zsbwq [863.230693ms] Jun 29 14:22:02.096: INFO: Latencies: [74.457316ms 211.89381ms 242.425885ms 279.121055ms 349.040139ms 391.198418ms 447.921189ms 525.60218ms 571.649321ms 648.496694ms 693.85869ms 694.88472ms 696.582647ms 698.559514ms 700.686061ms 700.762578ms 704.737282ms 705.270903ms 707.232651ms 708.078368ms 
712.874324ms 718.85496ms 723.25199ms 723.654845ms 724.921757ms 725.682729ms 731.331209ms 734.121988ms 735.908402ms 736.089734ms 737.114709ms 738.041046ms 739.141519ms 741.75248ms 742.060755ms 742.569749ms 742.731914ms 748.707831ms 753.642859ms 754.326164ms 755.141642ms 759.168855ms 760.096553ms 760.360369ms 766.484738ms 766.677037ms 768.343029ms 770.14483ms 773.15878ms 773.650711ms 775.394857ms 777.581247ms 777.922356ms 779.387412ms 783.898439ms 784.055324ms 784.502999ms 784.551307ms 784.631643ms 785.576911ms 785.868717ms 786.013844ms 787.785091ms 790.095975ms 791.659301ms 792.9539ms 793.817666ms 797.094232ms 797.68171ms 798.27078ms 798.433494ms 808.023171ms 809.947065ms 811.211163ms 813.975939ms 814.055588ms 814.377395ms 817.309526ms 818.026966ms 819.407101ms 820.315591ms 820.342113ms 820.805384ms 822.165881ms 822.609786ms 825.610719ms 826.92267ms 829.772216ms 830.819111ms 831.129021ms 831.939421ms 832.136003ms 832.45629ms 832.866015ms 837.492586ms 838.138487ms 839.81595ms 840.479199ms 843.519959ms 843.702959ms 844.253246ms 844.442376ms 845.35373ms 845.630283ms 849.607092ms 850.865294ms 851.418749ms 851.490655ms 854.337177ms 854.642418ms 856.261187ms 857.614599ms 859.634213ms 859.785239ms 860.470516ms 862.331605ms 863.230693ms 863.305395ms 863.808235ms 867.514831ms 867.894981ms 868.117833ms 869.59628ms 870.430111ms 871.89414ms 872.665577ms 873.803252ms 873.918106ms 873.954957ms 874.147464ms 874.440296ms 874.487668ms 874.705179ms 880.091135ms 880.126671ms 880.23221ms 880.30907ms 880.729862ms 883.443072ms 884.480375ms 884.62243ms 884.835455ms 886.457028ms 887.30594ms 887.314405ms 889.443426ms 889.94693ms 890.366665ms 890.680449ms 891.890654ms 892.390781ms 894.390488ms 897.963583ms 898.188574ms 898.544839ms 898.579081ms 898.750519ms 898.851046ms 899.178877ms 904.627521ms 904.735172ms 907.793004ms 909.269123ms 909.631824ms 910.197355ms 910.249353ms 910.501177ms 913.367127ms 914.088262ms 914.906881ms 914.918018ms 915.107251ms 915.74026ms 916.171374ms 916.312809ms 916.437152ms 921.26355ms 921.490369ms 928.757881ms 930.037828ms 934.551791ms 942.536631ms 943.679835ms 950.811755ms 954.080538ms 955.479082ms 956.493428ms 956.973035ms 966.096328ms 966.904775ms 977.206481ms 977.972104ms 987.177103ms 988.613115ms 990.039982ms 990.126152ms 1.008502424s 1.019936777s 1.020498496s 1.028872745s] Jun 29 14:22:02.097: INFO: 50 %ile: 844.253246ms Jun 29 14:22:02.097: INFO: 90 %ile: 934.551791ms Jun 29 14:22:02.097: INFO: 99 %ile: 1.020498496s Jun 29 14:22:02.097: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:22:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9611" for this suite. 
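A note on how the three %ile lines above are derived: for each of the 200 services the test records the time from service creation to seeing a ready endpoint, sorts the samples, and indexes directly into the sorted slice. The Go sketch below reproduces that arithmetic under the assumption that the index is len*p/100, which is consistent with the reported 99 %ile (1.020498496s is the second-largest of the 200 samples); the helper name and the truncated sample set are illustrative, not the framework's own code.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile indexes into the sorted samples at len*p/100, clamped to the
// last element; with 200 samples the 99 %ile is sorted[198], matching the
// 1.020498496s reported above.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative subset of the 200 endpoint latencies logged above.
	samples := []time.Duration{
		74457316 * time.Nanosecond,   // 74.457316ms
		844253246 * time.Nanosecond,  // 844.253246ms
		1020498496 * time.Nanosecond, // 1.020498496s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
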
Jun 29 14:22:38.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:22:38.218: INFO: namespace svc-latency-9611 deletion completed in 36.115221673s • [SLOW TEST:51.720 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:22:38.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 29 14:22:46.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:46.412: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:48.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:48.416: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:50.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:50.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:52.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:52.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:54.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:54.418: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:56.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:56.416: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:22:58.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:22:58.416: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:00.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:00.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:02.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:02.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:04.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:04.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:06.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:06.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:08.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:08.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 
14:23:10.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:10.417: INFO: Pod pod-with-prestop-exec-hook still exists Jun 29 14:23:12.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 29 14:23:12.417: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:23:12.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9447" for this suite. Jun 29 14:23:34.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:23:34.524: INFO: namespace container-lifecycle-hook-9447 deletion completed in 22.095846956s • [SLOW TEST:56.306 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:23:34.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 29 14:23:34.595: INFO: Waiting up to 5m0s for pod "client-containers-9870863e-5637-4e1e-b573-7603f026fffd" in namespace "containers-8953" to be "success or failure" Jun 29 14:23:34.599: INFO: Pod "client-containers-9870863e-5637-4e1e-b573-7603f026fffd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244712ms Jun 29 14:23:36.604: INFO: Pod "client-containers-9870863e-5637-4e1e-b573-7603f026fffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008913728s Jun 29 14:23:38.608: INFO: Pod "client-containers-9870863e-5637-4e1e-b573-7603f026fffd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013187236s STEP: Saw pod success Jun 29 14:23:38.608: INFO: Pod "client-containers-9870863e-5637-4e1e-b573-7603f026fffd" satisfied condition "success or failure" Jun 29 14:23:38.611: INFO: Trying to get logs from node iruya-worker pod client-containers-9870863e-5637-4e1e-b573-7603f026fffd container test-container: STEP: delete the pod Jun 29 14:23:38.631: INFO: Waiting for pod client-containers-9870863e-5637-4e1e-b573-7603f026fffd to disappear Jun 29 14:23:38.650: INFO: Pod client-containers-9870863e-5637-4e1e-b573-7603f026fffd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:23:38.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8953" for this suite. Jun 29 14:23:44.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:23:44.808: INFO: namespace containers-8953 deletion completed in 6.15442005s • [SLOW TEST:10.283 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:23:44.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-d5dvz in namespace proxy-3487 I0629 14:23:44.928020 6 runners.go:180] Created replication controller with name: proxy-service-d5dvz, namespace: proxy-3487, replica count: 1 I0629 14:23:45.978480 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:23:46.978736 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:23:47.978961 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0629 14:23:48.979238 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:49.979455 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:50.979663 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:51.979863 6 
runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:52.980110 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:53.980330 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:54.980554 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:55.980797 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0629 14:23:56.981067 6 runners.go:180] proxy-service-d5dvz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 29 14:23:56.984: INFO: setup took 12.091302663s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 29 14:23:56.991: INFO: (0) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 6.969947ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 7.316361ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 7.588431ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 7.575523ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 7.653122ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 8.075472ms) Jun 29 14:23:56.992: INFO: (0) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 8.016161ms) Jun 29 14:23:56.993: INFO: (0) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 8.494639ms) Jun 29 14:23:56.993: INFO: (0) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 8.423202ms) Jun 29 14:23:56.994: INFO: (0) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 9.294591ms) Jun 29 14:23:56.994: INFO: (0) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 9.3471ms) Jun 29 14:23:56.999: INFO: (0) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 14.66953ms) Jun 29 14:23:56.999: INFO: (0) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 14.773226ms) Jun 29 14:23:56.999: INFO: (0) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 14.781435ms) Jun 29 14:23:57.001: INFO: (0) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 5.065278ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 5.17708ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 5.255546ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... 
(200; 5.303644ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.424048ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.32244ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.60667ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 5.763084ms) Jun 29 14:23:57.007: INFO: (1) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 5.63769ms) Jun 29 14:23:57.008: INFO: (1) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 5.886273ms) Jun 29 14:23:57.008: INFO: (1) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 5.92595ms) Jun 29 14:23:57.008: INFO: (1) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test<... (200; 4.192797ms) Jun 29 14:23:57.012: INFO: (2) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... (200; 5.011556ms) Jun 29 14:23:57.013: INFO: (2) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 4.8612ms) Jun 29 14:23:57.013: INFO: (2) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.012729ms) Jun 29 14:23:57.013: INFO: (2) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.498274ms) Jun 29 14:23:57.013: INFO: (2) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.464785ms) Jun 29 14:23:57.013: INFO: (2) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 5.573952ms) Jun 29 14:23:57.014: INFO: (2) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.815013ms) Jun 29 14:23:57.014: INFO: (2) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.34364ms) Jun 29 14:23:57.014: INFO: (2) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 6.397484ms) Jun 29 14:23:57.015: INFO: (2) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 6.551138ms) Jun 29 14:23:57.015: INFO: (2) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.574516ms) Jun 29 14:23:57.025: INFO: (2) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 17.343634ms) Jun 29 14:23:57.025: INFO: (2) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 17.471805ms) Jun 29 14:23:57.029: INFO: (3) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.745592ms) Jun 29 14:23:57.030: INFO: (3) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.157849ms) Jun 29 14:23:57.030: INFO: (3) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.176607ms) Jun 29 14:23:57.030: INFO: (3) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 4.207421ms) Jun 29 14:23:57.030: INFO: (3) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... 
(200; 4.278378ms) Jun 29 14:23:57.031: INFO: (3) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 5.195866ms) Jun 29 14:23:57.031: INFO: (3) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.680159ms) Jun 29 14:23:57.031: INFO: (3) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 5.755496ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 5.99796ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.100097ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.051298ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 6.841718ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 6.908251ms) Jun 29 14:23:57.032: INFO: (3) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 4.395856ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.082826ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 6.217687ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 6.274488ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 6.311961ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test<... (200; 6.10192ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 6.358462ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.490766ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 6.491801ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.552402ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 6.616164ms) Jun 29 14:23:57.039: INFO: (4) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.49098ms) Jun 29 14:23:57.040: INFO: (4) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 6.965542ms) Jun 29 14:23:57.040: INFO: (4) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 6.839268ms) Jun 29 14:23:57.044: INFO: (5) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.11084ms) Jun 29 14:23:57.044: INFO: (5) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 4.512877ms) Jun 29 14:23:57.044: INFO: (5) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... 
(200; 4.714814ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 5.133147ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 5.257215ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 5.2551ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.161487ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.249353ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.320955ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 5.579756ms) Jun 29 14:23:57.045: INFO: (5) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 5.631406ms) Jun 29 14:23:57.046: INFO: (5) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 6.388314ms) Jun 29 14:23:57.046: INFO: (5) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.694408ms) Jun 29 14:23:57.046: INFO: (5) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 6.769923ms) Jun 29 14:23:57.047: INFO: (5) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.754223ms) Jun 29 14:23:57.049: INFO: (6) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 2.316849ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 3.852414ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 4.097546ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.100221ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 4.199168ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 4.380501ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... 
(200; 4.47838ms) Jun 29 14:23:57.051: INFO: (6) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 4.761411ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 4.879879ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.883551ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 5.033846ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 4.964637ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 5.604474ms) Jun 29 14:23:57.052: INFO: (6) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 5.658934ms) Jun 29 14:23:57.055: INFO: (7) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 2.836876ms) Jun 29 14:23:57.055: INFO: (7) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.004545ms) Jun 29 14:23:57.055: INFO: (7) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 2.922824ms) Jun 29 14:23:57.056: INFO: (7) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 3.384177ms) Jun 29 14:23:57.056: INFO: (7) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 3.329794ms) Jun 29 14:23:57.058: INFO: (7) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.636601ms) Jun 29 14:23:57.058: INFO: (7) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.687119ms) Jun 29 14:23:57.058: INFO: (7) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.037642ms) Jun 29 14:23:57.059: INFO: (7) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... (200; 4.981021ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.141888ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 5.189044ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 5.198698ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 5.225249ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 5.289947ms) Jun 29 14:23:57.067: INFO: (8) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... 
(200; 5.374341ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 6.348577ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 6.292251ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.322995ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 6.472465ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 6.423574ms) Jun 29 14:23:57.068: INFO: (8) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.578797ms) Jun 29 14:23:57.071: INFO: (9) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 2.666296ms) Jun 29 14:23:57.071: INFO: (9) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 2.857885ms) Jun 29 14:23:57.071: INFO: (9) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 2.853822ms) Jun 29 14:23:57.071: INFO: (9) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... (200; 3.757348ms) Jun 29 14:23:57.072: INFO: (9) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 3.897056ms) Jun 29 14:23:57.072: INFO: (9) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.690263ms) Jun 29 14:23:57.073: INFO: (9) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 4.120594ms) Jun 29 14:23:57.073: INFO: (9) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 4.27498ms) Jun 29 14:23:57.073: INFO: (9) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.604987ms) Jun 29 14:23:57.073: INFO: (9) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 4.643547ms) Jun 29 14:23:57.073: INFO: (9) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 4.591584ms) Jun 29 14:23:57.076: INFO: (10) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 2.783961ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.783493ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 3.755398ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 3.754388ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 3.773816ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 3.978912ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 3.946592ms) Jun 29 14:23:57.077: INFO: (10) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... 
(200; 3.993333ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.090362ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 4.053006ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 4.141757ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 4.116186ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 4.125674ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 4.1293ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 4.190232ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 4.220036ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.307587ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 4.486671ms) Jun 29 14:23:57.083: INFO: (11) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 4.452068ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 3.414938ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 3.282748ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 3.37195ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.337727ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 3.352163ms) Jun 29 14:23:57.086: INFO: (12) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 3.371194ms) Jun 29 14:23:57.087: INFO: (12) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test<... 
(200; 2.290632ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 4.646161ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test (200; 4.614536ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 4.623198ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 4.622573ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 4.665456ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.829187ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 4.854049ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 4.913211ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 4.946994ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 4.903072ms) Jun 29 14:23:57.093: INFO: (13) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 4.986183ms) Jun 29 14:23:57.095: INFO: (14) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 2.062867ms) Jun 29 14:23:57.095: INFO: (14) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 2.095459ms) Jun 29 14:23:57.098: INFO: (14) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 5.392008ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 5.600094ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 5.734339ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 5.760533ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 5.790491ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 5.773399ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 5.903286ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.894837ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: test<... 
(200; 6.240986ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 6.238688ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 6.199683ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.211344ms) Jun 29 14:23:57.099: INFO: (14) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.30397ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 18.340805ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 18.36816ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... (200; 18.761619ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 18.752036ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 18.806106ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 18.956947ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 18.957905ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 18.966279ms) Jun 29 14:23:57.118: INFO: (15) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... (200; 5.638655ms) Jun 29 14:23:57.125: INFO: (16) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 5.669888ms) Jun 29 14:23:57.125: INFO: (16) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.831347ms) Jun 29 14:23:57.125: INFO: (16) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 5.923159ms) Jun 29 14:23:57.125: INFO: (16) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 6.006517ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 6.227881ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 6.332808ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 6.424619ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.410595ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 6.410194ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 6.519836ms) Jun 29 14:23:57.126: INFO: (16) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 6.55853ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... 
(200; 4.594478ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.681503ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 4.695909ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 5.003022ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 5.00445ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 5.05285ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 5.047285ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.360575ms) Jun 29 14:23:57.131: INFO: (17) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 5.431686ms) Jun 29 14:23:57.132: INFO: (17) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: ... (200; 4.296811ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 4.814921ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname2/proxy/: bar (200; 4.947011ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname1/proxy/: foo (200; 5.018794ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/http:proxy-service-d5dvz:portname2/proxy/: bar (200; 4.932899ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/proxy-service-d5dvz:portname1/proxy/: foo (200; 5.030196ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname2/proxy/: tls qux (200; 5.006495ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 5.11413ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 5.080317ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/services/https:proxy-service-d5dvz:tlsportname1/proxy/: tls baz (200; 5.248378ms) Jun 29 14:23:57.137: INFO: (18) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 5.265772ms) Jun 29 14:23:57.140: INFO: (19) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:460/proxy/: tls baz (200; 2.396806ms) Jun 29 14:23:57.140: INFO: (19) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:1080/proxy/: test<... (200; 2.883723ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 3.172058ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:1080/proxy/: ... 
(200; 3.338888ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 3.318241ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8/proxy/: test (200; 3.461023ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/proxy-service-d5dvz-7drm8:162/proxy/: bar (200; 3.380485ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/http:proxy-service-d5dvz-7drm8:160/proxy/: foo (200; 3.547537ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:462/proxy/: tls qux (200; 3.703121ms) Jun 29 14:23:57.141: INFO: (19) /api/v1/namespaces/proxy-3487/pods/https:proxy-service-d5dvz-7drm8:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 29 14:24:08.151: INFO: Waiting up to 5m0s for pod "client-containers-352af4be-33bd-40c0-b273-a26abbed50c1" in namespace "containers-542" to be "success or failure" Jun 29 14:24:08.157: INFO: Pod "client-containers-352af4be-33bd-40c0-b273-a26abbed50c1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.775278ms Jun 29 14:24:10.160: INFO: Pod "client-containers-352af4be-33bd-40c0-b273-a26abbed50c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009186688s Jun 29 14:24:12.165: INFO: Pod "client-containers-352af4be-33bd-40c0-b273-a26abbed50c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013435658s STEP: Saw pod success Jun 29 14:24:12.165: INFO: Pod "client-containers-352af4be-33bd-40c0-b273-a26abbed50c1" satisfied condition "success or failure" Jun 29 14:24:12.167: INFO: Trying to get logs from node iruya-worker pod client-containers-352af4be-33bd-40c0-b273-a26abbed50c1 container test-container: STEP: delete the pod Jun 29 14:24:12.187: INFO: Waiting for pod client-containers-352af4be-33bd-40c0-b273-a26abbed50c1 to disappear Jun 29 14:24:12.219: INFO: Pod client-containers-352af4be-33bd-40c0-b273-a26abbed50c1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:24:12.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-542" for this suite. 
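The two Docker Containers cases above exercise the mapping from the pod spec onto the image's ENTRYPOINT and CMD: command replaces ENTRYPOINT (the "docker entrypoint" case), args replaces CMD (the earlier "docker cmd" case), and setting both, as in the test that just passed, overrides the image entirely. A minimal sketch of the relevant container spec using k8s.io/api/core/v1 types; the container name, image, and command line are illustrative, not the ones the test generated.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With both fields set, the image's ENTRYPOINT and CMD are ignored and
	// the container runs exactly: /bin/sh -c 'echo override'.
	c := corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh"},             // replaces the image ENTRYPOINT
		Args:    []string{"-c", "echo override"}, // replaces the image CMD
	}
	fmt.Printf("%s runs: %v %v\n", c.Name, c.Command, c.Args)
}
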
Jun 29 14:24:18.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:24:18.318: INFO: namespace containers-542 deletion completed in 6.095458178s • [SLOW TEST:10.284 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:24:18.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0629 14:24:19.476453 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 29 14:24:19.476: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:24:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7122" for this suite. 
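What the garbage collector case above relies on: the Deployment's ReplicaSet, and the ReplicaSet's Pods, carry ownerReferences back up the chain, so deleting the Deployment without orphaning lets the garbage collector cascade the delete; the "expected 0 rs, got 1 rs" lines are the poll loop observing that cascade still in flight. A minimal client-go sketch of such a delete, assuming pre-1.18 client-go signatures (no context argument) to match the vintage of this run; the function name is illustrative and clientset construction is elided.

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithCascade removes a Deployment and lets the garbage collector
// delete its ReplicaSets and Pods through their ownerReferences.
func deleteWithCascade(cs kubernetes.Interface, ns, name string) error {
	// Background (not Orphan) propagation: dependents are collected after
	// the owner is gone, which is the behavior the test asserts.
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
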
Jun 29 14:24:25.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:24:25.615: INFO: namespace gc-7122 deletion completed in 6.13657776s • [SLOW TEST:7.297 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:24:25.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 in namespace container-probe-8071 Jun 29 14:24:29.745: INFO: Started pod liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 in namespace container-probe-8071 STEP: checking the pod's current state and verifying that restartCount is present Jun 29 14:24:29.748: INFO: Initial restart count of pod liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is 0 Jun 29 14:24:45.786: INFO: Restart count of pod container-probe-8071/liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is now 1 (16.037958577s elapsed) Jun 29 14:25:05.830: INFO: Restart count of pod container-probe-8071/liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is now 2 (36.081607118s elapsed) Jun 29 14:25:25.882: INFO: Restart count of pod container-probe-8071/liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is now 3 (56.134461797s elapsed) Jun 29 14:25:45.987: INFO: Restart count of pod container-probe-8071/liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is now 4 (1m16.23880566s elapsed) Jun 29 14:26:56.268: INFO: Restart count of pod container-probe-8071/liveness-012537e9-41ef-4231-82fc-59c1c1eb3904 is now 5 (2m26.520135473s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:26:56.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8071" for this suite. 
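The restart cadence above is worth reading: restarts 1 through 4 arrive roughly every 20s (probe period plus restart latency), while restart 5 takes an extra ~70s because the kubelet's exponential CrashLoopBackOff begins spacing restarts out; the assertion is only that restartCount never decreases. A pod whose liveness probe starts failing shortly after startup produces this pattern; the sketch below shows the shape of such a spec using the v1.15-era corev1.Handler type (renamed ProbeHandler in later API versions). The image and probe command are illustrative, not necessarily the test's own.

package sketch

import corev1 "k8s.io/api/core/v1"

// failingLiveness returns a container whose liveness probe fails once
// /tmp/health disappears, so the kubelet kills and restarts it and the
// pod's restartCount climbs monotonically.
func failingLiveness() corev1.Container {
	return corev1.Container{
		Name:    "liveness",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // ProbeHandler in newer API versions
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
}
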
Jun 29 14:27:02.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:27:02.401: INFO: namespace container-probe-8071 deletion completed in 6.113989532s • [SLOW TEST:156.784 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:27:02.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 29 14:27:02.469: INFO: Waiting up to 5m0s for pod "pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3" in namespace "emptydir-6266" to be "success or failure" Jun 29 14:27:02.487: INFO: Pod "pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.155776ms Jun 29 14:27:04.491: INFO: Pod "pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022099763s Jun 29 14:27:06.496: INFO: Pod "pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026796733s STEP: Saw pod success Jun 29 14:27:06.496: INFO: Pod "pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3" satisfied condition "success or failure" Jun 29 14:27:06.500: INFO: Trying to get logs from node iruya-worker2 pod pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3 container test-container: STEP: delete the pod Jun 29 14:27:06.539: INFO: Waiting for pod pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3 to disappear Jun 29 14:27:06.550: INFO: Pod pod-92253f4d-84b1-480b-bd6d-39e8552ed2e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:27:06.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6266" for this suite. 
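For the EmptyDir case above, "(root,0644,default)" encodes the parameters: the file is written as root, with mode 0644, on the default medium (node-local disk rather than tmpfs), and the test container echoes the file's mode, owner, and contents back through the pod log that was just fetched. A sketch of the volume wiring with illustrative names; only the medium differs between this case and its "Memory"-medium siblings.

package sketch

import corev1 "k8s.io/api/core/v1"

// emptyDirVolume wires a pod-lifetime scratch volume into a container.
// StorageMediumDefault backs it with node disk; StorageMediumMemory would
// use tmpfs instead (the "default medium" named in the test above).
func emptyDirVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	return vol, mount
}
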
Jun 29 14:27:12.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:27:12.692: INFO: namespace emptydir-6266 deletion completed in 6.138000212s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:27:12.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6102 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 29 14:27:12.780: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 29 14:27:32.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:8080/dial?request=hostName&protocol=udp&host=10.244.2.76&port=8081&tries=1'] Namespace:pod-network-test-6102 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 14:27:32.928: INFO: >>> kubeConfig: /root/.kube/config I0629 14:27:32.963488 6 log.go:172] (0xc001ea8210) (0xc0026e80a0) Create stream I0629 14:27:32.963514 6 log.go:172] (0xc001ea8210) (0xc0026e80a0) Stream added, broadcasting: 1 I0629 14:27:32.966116 6 log.go:172] (0xc001ea8210) Reply frame received for 1 I0629 14:27:32.966181 6 log.go:172] (0xc001ea8210) (0xc0003fd5e0) Create stream I0629 14:27:32.966208 6 log.go:172] (0xc001ea8210) (0xc0003fd5e0) Stream added, broadcasting: 3 I0629 14:27:32.967124 6 log.go:172] (0xc001ea8210) Reply frame received for 3 I0629 14:27:32.967187 6 log.go:172] (0xc001ea8210) (0xc001094280) Create stream I0629 14:27:32.967215 6 log.go:172] (0xc001ea8210) (0xc001094280) Stream added, broadcasting: 5 I0629 14:27:32.968143 6 log.go:172] (0xc001ea8210) Reply frame received for 5 I0629 14:27:33.030237 6 log.go:172] (0xc001ea8210) Data frame received for 3 I0629 14:27:33.030270 6 log.go:172] (0xc0003fd5e0) (3) Data frame handling I0629 14:27:33.030289 6 log.go:172] (0xc0003fd5e0) (3) Data frame sent I0629 14:27:33.030933 6 log.go:172] (0xc001ea8210) Data frame received for 5 I0629 14:27:33.030965 6 log.go:172] (0xc001094280) (5) Data frame handling I0629 14:27:33.030992 6 log.go:172] (0xc001ea8210) Data frame received for 3 I0629 14:27:33.031006 6 log.go:172] (0xc0003fd5e0) (3) Data frame handling I0629 14:27:33.032612 6 log.go:172] (0xc001ea8210) Data frame received for 1 I0629 14:27:33.032634 6 log.go:172] (0xc0026e80a0) (1) Data frame 
handling I0629 14:27:33.032645 6 log.go:172] (0xc0026e80a0) (1) Data frame sent I0629 14:27:33.032661 6 log.go:172] (0xc001ea8210) (0xc0026e80a0) Stream removed, broadcasting: 1 I0629 14:27:33.032676 6 log.go:172] (0xc001ea8210) Go away received I0629 14:27:33.032802 6 log.go:172] (0xc001ea8210) (0xc0026e80a0) Stream removed, broadcasting: 1 I0629 14:27:33.032821 6 log.go:172] (0xc001ea8210) (0xc0003fd5e0) Stream removed, broadcasting: 3 I0629 14:27:33.032833 6 log.go:172] (0xc001ea8210) (0xc001094280) Stream removed, broadcasting: 5 Jun 29 14:27:33.032: INFO: Waiting for endpoints: map[] Jun 29 14:27:33.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:8080/dial?request=hostName&protocol=udp&host=10.244.1.127&port=8081&tries=1'] Namespace:pod-network-test-6102 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 29 14:27:33.036: INFO: >>> kubeConfig: /root/.kube/config I0629 14:27:33.065885 6 log.go:172] (0xc000a0f810) (0xc001566000) Create stream I0629 14:27:33.065913 6 log.go:172] (0xc000a0f810) (0xc001566000) Stream added, broadcasting: 1 I0629 14:27:33.067707 6 log.go:172] (0xc000a0f810) Reply frame received for 1 I0629 14:27:33.067745 6 log.go:172] (0xc000a0f810) (0xc0012292c0) Create stream I0629 14:27:33.067761 6 log.go:172] (0xc000a0f810) (0xc0012292c0) Stream added, broadcasting: 3 I0629 14:27:33.068673 6 log.go:172] (0xc000a0f810) Reply frame received for 3 I0629 14:27:33.068708 6 log.go:172] (0xc000a0f810) (0xc001229360) Create stream I0629 14:27:33.068718 6 log.go:172] (0xc000a0f810) (0xc001229360) Stream added, broadcasting: 5 I0629 14:27:33.069961 6 log.go:172] (0xc000a0f810) Reply frame received for 5 I0629 14:27:33.124695 6 log.go:172] (0xc000a0f810) Data frame received for 3 I0629 14:27:33.124735 6 log.go:172] (0xc0012292c0) (3) Data frame handling I0629 14:27:33.124766 6 log.go:172] (0xc0012292c0) (3) Data frame sent I0629 14:27:33.125824 6 log.go:172] (0xc000a0f810) Data frame received for 5 I0629 14:27:33.125843 6 log.go:172] (0xc001229360) (5) Data frame handling I0629 14:27:33.125868 6 log.go:172] (0xc000a0f810) Data frame received for 3 I0629 14:27:33.125877 6 log.go:172] (0xc0012292c0) (3) Data frame handling I0629 14:27:33.127580 6 log.go:172] (0xc000a0f810) Data frame received for 1 I0629 14:27:33.127614 6 log.go:172] (0xc001566000) (1) Data frame handling I0629 14:27:33.127639 6 log.go:172] (0xc001566000) (1) Data frame sent I0629 14:27:33.127656 6 log.go:172] (0xc000a0f810) (0xc001566000) Stream removed, broadcasting: 1 I0629 14:27:33.127673 6 log.go:172] (0xc000a0f810) Go away received I0629 14:27:33.127797 6 log.go:172] (0xc000a0f810) (0xc001566000) Stream removed, broadcasting: 1 I0629 14:27:33.127822 6 log.go:172] (0xc000a0f810) (0xc0012292c0) Stream removed, broadcasting: 3 I0629 14:27:33.127835 6 log.go:172] (0xc000a0f810) (0xc001229360) Stream removed, broadcasting: 5 Jun 29 14:27:33.127: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:27:33.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6102" for this suite. 
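The two ExecWithOptions blocks above are the host test pod curling the webserver on one backend pod and asking it to dial each of the other pods over UDP; an empty "Waiting for endpoints: map[]" means every expected hostname has answered. A standalone sketch of the same probe in plain Go (the IPs and ports are the ones from this run and are only reachable from inside the cluster network):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Ask the webserver pod at 10.244.2.77:8080 to dial 10.244.2.76:8081
	// over UDP and report which hostname answered; the test repeats this
	// for every backend pod until the expected set is empty.
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "udp")
	q.Set("host", "10.244.2.76")
	q.Set("port", "8081")
	q.Set("tries", "1")
	u := url.URL{Scheme: "http", Host: "10.244.2.77:8080", Path: "/dial", RawQuery: q.Encode()}

	resp, err := http.Get(u.String()) // pod IPs, so this only works in-cluster
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]}
}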
Jun 29 14:27:55.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:27:55.219: INFO: namespace pod-network-test-6102 deletion completed in 22.087458154s • [SLOW TEST:42.526 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:27:55.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:27:55.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac" in namespace "projected-58" to be "success or failure" Jun 29 14:27:55.306: INFO: Pod "downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633194ms Jun 29 14:27:57.310: INFO: Pod "downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007883925s Jun 29 14:27:59.315: INFO: Pod "downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01287621s STEP: Saw pod success Jun 29 14:27:59.316: INFO: Pod "downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac" satisfied condition "success or failure" Jun 29 14:27:59.319: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac container client-container: STEP: delete the pod Jun 29 14:27:59.338: INFO: Waiting for pod downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac to disappear Jun 29 14:27:59.342: INFO: Pod downwardapi-volume-8d6ab87f-5fab-4815-b29f-a2bb03b6f3ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:27:59.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-58" for this suite. 
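A sketch of the pod shape behind this projected downward API test: the container's own memory request is projected into a file, which the test pod then reads back (downward API resource values are rendered in bytes, e.g. 33554432 for 32Mi). The names, image, and the 32Mi figure are illustrative, not taken from the e2e source:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}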
Jun 29 14:28:05.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:28:05.456: INFO: namespace projected-58 deletion completed in 6.111064641s • [SLOW TEST:10.237 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:28:05.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:28:05.528: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 29 14:28:10.532: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 29 14:28:10.532: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 29 14:28:12.536: INFO: Creating deployment "test-rollover-deployment" Jun 29 14:28:12.558: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 29 14:28:14.565: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 29 14:28:14.572: INFO: Ensure that both replica sets have 1 created replica Jun 29 14:28:14.578: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 29 14:28:14.584: INFO: Updating deployment test-rollover-deployment Jun 29 14:28:14.584: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 29 14:28:16.596: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 29 14:28:16.603: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 29 14:28:16.609: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:16.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037694, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:18.619: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:18.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:20.618: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:20.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:22.618: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:22.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:24.618: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:24.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:26.618: INFO: all replica sets need to contain the pod-template-hash label Jun 29 14:28:26.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729037692, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 29 14:28:28.617: INFO: Jun 29 14:28:28.617: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 29 14:28:28.623: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3372,SelfLink:/apis/apps/v1/namespaces/deployment-3372/deployments/test-rollover-deployment,UID:f96a9a97-b464-49e0-af5d-7f53fc29c61f,ResourceVersion:19122850,Generation:2,CreationTimestamp:2020-06-29 14:28:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-29 14:28:12 +0000 UTC 2020-06-29 14:28:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-29 14:28:27 +0000 UTC 2020-06-29 14:28:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 29 14:28:28.626: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3372,SelfLink:/apis/apps/v1/namespaces/deployment-3372/replicasets/test-rollover-deployment-854595fc44,UID:da36fe35-b327-4b27-8efa-4acadf24427c,ResourceVersion:19122839,Generation:2,CreationTimestamp:2020-06-29 14:28:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f96a9a97-b464-49e0-af5d-7f53fc29c61f 0xc001fd6237 0xc001fd6238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 29 14:28:28.626: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 29 14:28:28.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3372,SelfLink:/apis/apps/v1/namespaces/deployment-3372/replicasets/test-rollover-controller,UID:3684dd23-8211-4a91-b296-54c58d8f8ed3,ResourceVersion:19122848,Generation:2,CreationTimestamp:2020-06-29 14:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f96a9a97-b464-49e0-af5d-7f53fc29c61f 0xc001fd6167 0xc001fd6168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 29 14:28:28.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3372,SelfLink:/apis/apps/v1/namespaces/deployment-3372/replicasets/test-rollover-deployment-9b8b997cf,UID:759e8b60-4d02-4e33-a334-be4f0cea6568,ResourceVersion:19122807,Generation:2,CreationTimestamp:2020-06-29 14:28:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f96a9a97-b464-49e0-af5d-7f53fc29c61f 0xc001fd6300 0xc001fd6301}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 29 14:28:28.628: INFO: Pod "test-rollover-deployment-854595fc44-k5qhv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-k5qhv,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3372,SelfLink:/api/v1/namespaces/deployment-3372/pods/test-rollover-deployment-854595fc44-k5qhv,UID:c2b33fa2-b455-4f9e-b8d0-6fa829e26ef7,ResourceVersion:19122817,Generation:0,CreationTimestamp:2020-06-29 14:28:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 da36fe35-b327-4b27-8efa-4acadf24427c 0xc002cc71b7 0xc002cc71b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jgd5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jgd5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jgd5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc7230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc7250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:28:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:28:17 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-06-29 14:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-29 14:28:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.80,StartTime:2020-06-29 14:28:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-29 14:28:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a21bf91030be7c8a8a464717e8f89259da34951db82aeadcebb9e44c81bf9530}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:28:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3372" for this suite. Jun 29 14:28:34.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:28:34.865: INFO: namespace deployment-3372 deletion completed in 6.233913468s • [SLOW TEST:29.408 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:28:34.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 29 14:28:41.800: INFO: 0 pods remaining Jun 29 14:28:41.800: INFO: 0 pods has nil DeletionTimestamp Jun 29 14:28:41.800: INFO: Jun 29 14:28:42.487: INFO: 0 pods remaining Jun 29 14:28:42.487: INFO: 0 pods has nil DeletionTimestamp Jun 29 14:28:42.487: INFO: STEP: Gathering metrics W0629 14:28:43.303750 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
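The deleteOptions variant this garbage-collector test exercises is, presumably, foreground cascading deletion: the RC is kept around (carrying a deletionTimestamp and the foregroundDeletion finalizer) until every dependent pod is gone, which is why the log counts pods remaining after the delete. A minimal sketch of such options:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation: the API server blocks final removal of the
	// owner until the garbage collector has deleted all dependents.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	out, _ := json.Marshal(opts)
	fmt.Println(string(out)) // {"propagationPolicy":"Foreground"}
}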
Jun 29 14:28:43.303: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:28:43.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2155" for this suite. Jun 29 14:28:49.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:28:49.672: INFO: namespace gc-2155 deletion completed in 6.364249851s • [SLOW TEST:14.807 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:28:49.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 29 14:28:53.789: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:28:53.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-397" for this suite. 
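The line `Expected: &{DONE} to match Container's Termination Message: DONE` is the heart of this test: the container writes nothing to its termination-message file and exits non-zero, so with TerminationMessagePolicy FallbackToLogsOnError the kubelet copies the tail of the container log ("DONE") into the termination message. A minimal sketch of such a container spec, with busybox standing in for the suite's image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Prints DONE to stdout, leaves /dev/termination-log empty, exits 1;
	// the fallback policy then surfaces "DONE" as the termination message.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox", // illustrative
				Command:                  []string{"/bin/sh", "-c", "printf DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}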
Jun 29 14:28:59.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:29:00.042: INFO: namespace container-runtime-397 deletion completed in 6.100505432s • [SLOW TEST:10.369 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:29:00.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:29:00.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96" in namespace "downward-api-9916" to be "success or failure" Jun 29 14:29:00.168: INFO: Pod "downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96": Phase="Pending", Reason="", readiness=false. Elapsed: 17.446169ms Jun 29 14:29:02.173: INFO: Pod "downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02173721s Jun 29 14:29:04.178: INFO: Pod "downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026786822s STEP: Saw pod success Jun 29 14:29:04.178: INFO: Pod "downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96" satisfied condition "success or failure" Jun 29 14:29:04.180: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96 container client-container: STEP: delete the pod Jun 29 14:29:04.204: INFO: Waiting for pod downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96 to disappear Jun 29 14:29:04.208: INFO: Pod downwardapi-volume-91918098-27b1-4d27-a0d0-34873d387a96 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:29:04.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9916" for this suite. 
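A sketch of the pod shape behind this test: a plain downward API volume exposing metadata.name as a file named podname, which the client container prints and the test compares against the pod's actual name (image and paths are illustrative, not the e2e source):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The pod's own name becomes the content of /etc/podinfo/podname.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-podname-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}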
Jun 29 14:29:10.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:29:10.311: INFO: namespace downward-api-9916 deletion completed in 6.099191425s • [SLOW TEST:10.268 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:29:10.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 29 14:29:10.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8076' Jun 29 14:29:13.300: INFO: stderr: "" Jun 29 14:29:13.300: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 29 14:29:13.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8076' Jun 29 14:29:13.622: INFO: stderr: "" Jun 29 14:29:13.622: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 29 14:29:14.628: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:29:14.628: INFO: Found 0 / 1 Jun 29 14:29:15.628: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:29:15.628: INFO: Found 0 / 1 Jun 29 14:29:16.627: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:29:16.627: INFO: Found 1 / 1 Jun 29 14:29:16.627: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 29 14:29:16.631: INFO: Selector matched 1 pods for map[app:redis] Jun 29 14:29:16.631: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 29 14:29:16.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-45d74 --namespace=kubectl-8076' Jun 29 14:29:16.748: INFO: stderr: "" Jun 29 14:29:16.748: INFO: stdout: "Name: redis-master-45d74\nNamespace: kubectl-8076\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Mon, 29 Jun 2020 14:29:13 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.135\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://483148439b859088e3fcfe123a1b45c011e9a4fe3e9af08a5032003330c9baec\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 29 Jun 2020 14:29:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-r4zn5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-r4zn5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-r4zn5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-8076/redis-master-45d74 to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 0s kubelet, iruya-worker2 Started container redis-master\n" Jun 29 14:29:16.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8076' Jun 29 14:29:16.856: INFO: stderr: "" Jun 29 14:29:16.856: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8076\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-45d74\n" Jun 29 14:29:16.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8076' Jun 29 14:29:16.952: INFO: stderr: "" Jun 29 14:29:16.953: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8076\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.14.133\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.135:6379\nSession Affinity: None\nEvents: \n" Jun 29 14:29:16.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 29 14:29:17.097: INFO: stderr: "" Jun 29 14:29:17.097: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 29 Jun 2020 14:29:00 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 29 Jun 2020 14:29:00 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 29 Jun 2020 14:29:00 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 29 Jun 2020 14:29:00 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 105d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 29 14:29:17.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8076' Jun 29 14:29:17.209: INFO: stderr: "" Jun 29 14:29:17.209: INFO: stdout: "Name: kubectl-8076\nLabels: e2e-framework=kubectl\n e2e-run=4854ca73-ad24-4e23-b955-7a339d8f45af\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:29:17.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8076" for this suite. 
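The describe calls above can be reproduced outside the suite; a small Go wrapper mirroring the exact invocations from this run (the resource names, namespace, and kubeconfig path are the ones in the log and are only illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same five describe calls the test issues, in order.
	invocations := [][]string{
		{"describe", "pod", "redis-master-45d74", "--namespace=kubectl-8076"},
		{"describe", "rc", "redis-master", "--namespace=kubectl-8076"},
		{"describe", "service", "redis-master", "--namespace=kubectl-8076"},
		{"describe", "node", "iruya-control-plane"},
		{"describe", "namespace", "kubectl-8076"},
	}
	for _, args := range invocations {
		cmd := exec.Command("kubectl", append([]string{"--kubeconfig=/root/.kube/config"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Print(string(out))
	}
}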
Jun 29 14:29:39.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:29:39.331: INFO: namespace kubectl-8076 deletion completed in 22.117049194s • [SLOW TEST:29.019 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:29:39.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d9524d7e-2d77-44a5-991f-9b57c2f33106 STEP: Creating a pod to test consume secrets Jun 29 14:29:39.406: INFO: Waiting up to 5m0s for pod "pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8" in namespace "secrets-4355" to be "success or failure" Jun 29 14:29:39.420: INFO: Pod "pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.315791ms Jun 29 14:29:41.425: INFO: Pod "pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019232746s Jun 29 14:29:43.430: INFO: Pod "pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024011579s STEP: Saw pod success Jun 29 14:29:43.430: INFO: Pod "pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8" satisfied condition "success or failure" Jun 29 14:29:43.433: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8 container secret-volume-test: STEP: delete the pod Jun 29 14:29:43.468: INFO: Waiting for pod pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8 to disappear Jun 29 14:29:43.488: INFO: Pod pod-secrets-67caed67-bd3c-421b-9a71-de6949a01ad8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:29:43.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4355" for this suite. 
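defaultMode on a secret volume sets the permission bits of every file projected from the secret. The log does not record which mode this run used, so the sketch below picks 0400 purely as an example:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // example value; applied to every file in the volume
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-d9524d7e-2d77-44a5-991f-9b57c2f33106",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // illustrative; the suite uses its own mounttest image
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}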
Jun 29 14:29:49.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:29:49.578: INFO: namespace secrets-4355 deletion completed in 6.086403542s • [SLOW TEST:10.247 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:29:49.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-54e075d2-2568-4420-a659-e6e97db6c060 STEP: Creating secret with name s-test-opt-upd-70913dcf-cd4c-45d2-b668-de2d1e14e0db STEP: Creating the pod STEP: Deleting secret s-test-opt-del-54e075d2-2568-4420-a659-e6e97db6c060 STEP: Updating secret s-test-opt-upd-70913dcf-cd4c-45d2-b668-de2d1e14e0db STEP: Creating secret with name s-test-opt-create-9737e1d5-cd54-458c-b8be-0ae85f007190 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:31:18.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8314" for this suite. 
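The choreography above (delete one optional secret, update a second, create a third, then wait) works because the kubelet periodically resyncs secret volumes for a running pod, and marking a source Optional lets the pod keep running even while a referenced secret is absent. One possible shape for such a volume; the suite may mount each secret separately, but the optional semantics are the same:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// Optional sources: missing secrets simply contribute no files, and
	// kubelet's periodic resync updates the volume contents in place.
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-54e075d2-2568-4420-a659-e6e97db6c060"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd-70913dcf-cd4c-45d2-b668-de2d1e14e0db"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}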
Jun 29 14:31:40.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:31:40.461: INFO: namespace projected-8314 deletion completed in 22.105887784s • [SLOW TEST:110.883 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:31:40.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:31:40.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b" in namespace "downward-api-5306" to be "success or failure" Jun 29 14:31:40.592: INFO: Pod "downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.730693ms Jun 29 14:31:42.609: INFO: Pod "downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038230989s Jun 29 14:31:44.614: INFO: Pod "downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042852431s STEP: Saw pod success Jun 29 14:31:44.614: INFO: Pod "downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b" satisfied condition "success or failure" Jun 29 14:31:44.617: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b container client-container: STEP: delete the pod Jun 29 14:31:44.650: INFO: Waiting for pod downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b to disappear Jun 29 14:31:44.660: INFO: Pod downwardapi-volume-796a1405-31ba-411f-b375-cec3f2bcb28b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:31:44.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5306" for this suite. 
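The downward API volume used here projects the container's own memory request into a file that the test container then prints. A minimal sketch under assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                   # the value the projected file should report (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory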
Jun 29 14:31:50.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:31:50.781: INFO: namespace downward-api-5306 deletion completed in 6.117264389s • [SLOW TEST:10.319 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:31:50.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-46197e52-275f-41bc-a957-d3a9b174b8fd STEP: Creating a pod to test consume secrets Jun 29 14:31:50.943: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad" in namespace "projected-8905" to be "success or failure" Jun 29 14:31:50.946: INFO: Pod "pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.189653ms Jun 29 14:31:52.950: INFO: Pod "pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007136459s Jun 29 14:31:54.957: INFO: Pod "pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013997628s STEP: Saw pod success Jun 29 14:31:54.957: INFO: Pod "pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad" satisfied condition "success or failure" Jun 29 14:31:54.960: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad container secret-volume-test: STEP: delete the pod Jun 29 14:31:54.992: INFO: Waiting for pod pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad to disappear Jun 29 14:31:55.012: INFO: Pod pod-projected-secrets-8f0b3d59-cb74-462b-b255-96d35f92adad no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:31:55.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8905" for this suite. 
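This variant mounts a single secret through two separate volumes in one pod. A sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: shared-secret        # the same secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: shared-secret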
Jun 29 14:32:01.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:01.371: INFO: namespace projected-8905 deletion completed in 6.352662217s • [SLOW TEST:10.590 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:01.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-8984/configmap-test-f6114bd3-b18e-4950-8637-8a108e156145 STEP: Creating a pod to test consume configMaps Jun 29 14:32:01.479: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f" in namespace "configmap-8984" to be "success or failure" Jun 29 14:32:01.486: INFO: Pod "pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.880311ms Jun 29 14:32:03.489: INFO: Pod "pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009885117s Jun 29 14:32:05.493: INFO: Pod "pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014220012s STEP: Saw pod success Jun 29 14:32:05.493: INFO: Pod "pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f" satisfied condition "success or failure" Jun 29 14:32:05.496: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f container env-test: STEP: delete the pod Jun 29 14:32:05.534: INFO: Waiting for pod pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f to disappear Jun 29 14:32:05.545: INFO: Pod pod-configmaps-dd31762f-e142-4eba-8a60-bcbbfbd0595f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8984" for this suite. 
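Consuming a ConfigMap via the environment, as this test does, can use a per-key reference or a blanket envFrom. A sketch with assumed ConfigMap and key names:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["env"]                   # dump the environment for inspection
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config            # illustrative ConfigMap name
          key: data-1                  # illustrative key
    envFrom:
    - configMapRef:
        name: demo-config              # imports every key as an environment variable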
Jun 29 14:32:11.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:11.650: INFO: namespace configmap-8984 deletion completed in 6.087865093s • [SLOW TEST:10.278 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:11.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-08ad707b-d905-479d-bbc4-fd9cdd9620c0 STEP: Creating a pod to test consume configMaps Jun 29 14:32:11.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894" in namespace "projected-3185" to be "success or failure" Jun 29 14:32:11.763: INFO: Pod "pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894": Phase="Pending", Reason="", readiness=false. Elapsed: 17.142255ms Jun 29 14:32:13.767: INFO: Pod "pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02154616s Jun 29 14:32:15.771: INFO: Pod "pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025903816s STEP: Saw pod success Jun 29 14:32:15.771: INFO: Pod "pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894" satisfied condition "success or failure" Jun 29 14:32:15.774: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894 container projected-configmap-volume-test: STEP: delete the pod Jun 29 14:32:15.820: INFO: Waiting for pod pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894 to disappear Jun 29 14:32:15.828: INFO: Pod pod-projected-configmaps-8becf16b-ec80-4735-b695-e30cc6879894 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:15.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3185" for this suite. 
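A projected volume, as exercised here and in the neighboring projected-secret tests, packs several sources into one mount. A sketch with assumed object names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-in-one-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-volume-test
    image: busybox
    command: ["ls", "-R", "/projected-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config            # illustrative
      - secret:
          name: demo-secret            # illustrative
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

All sources land under the same mountPath, which is the main behavioral difference from mounting plain configMap and secret volumes separately.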
Jun 29 14:32:21.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:21.962: INFO: namespace projected-3185 deletion completed in 6.130109945s • [SLOW TEST:10.312 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:21.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-94c02d53-dbd7-4246-b2ce-eb07480e5d0d STEP: Creating a pod to test consume secrets Jun 29 14:32:22.067: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1" in namespace "projected-1203" to be "success or failure" Jun 29 14:32:22.106: INFO: Pod "pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.282626ms Jun 29 14:32:24.166: INFO: Pod "pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098528229s Jun 29 14:32:26.184: INFO: Pod "pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116671172s STEP: Saw pod success Jun 29 14:32:26.184: INFO: Pod "pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1" satisfied condition "success or failure" Jun 29 14:32:26.187: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1 container projected-secret-volume-test: STEP: delete the pod Jun 29 14:32:26.213: INFO: Waiting for pod pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1 to disappear Jun 29 14:32:26.231: INFO: Pod pod-projected-secrets-66bd661b-5175-4b48-adc7-f68e149c0ad1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:26.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1203" for this suite. 
Jun 29 14:32:32.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:32.330: INFO: namespace projected-1203 deletion completed in 6.095755771s • [SLOW TEST:10.367 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:32.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:32:32.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e" in namespace "downward-api-6603" to be "success or failure" Jun 29 14:32:32.413: INFO: Pod "downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.502498ms Jun 29 14:32:34.417: INFO: Pod "downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025177257s Jun 29 14:32:36.422: INFO: Pod "downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02994109s STEP: Saw pod success Jun 29 14:32:36.422: INFO: Pod "downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e" satisfied condition "success or failure" Jun 29 14:32:36.425: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e container client-container: STEP: delete the pod Jun 29 14:32:36.446: INFO: Waiting for pod downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e to disappear Jun 29 14:32:36.464: INFO: Pod downwardapi-volume-ea494f11-2dd3-4740-abe6-056a8000555e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:36.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6603" for this suite. 
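The wrinkle in this test is that the container sets no CPU limit, so the downward API file falls back to the node's allocatable CPU. A sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-default-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set: the projected value defaults to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu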
Jun 29 14:32:42.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:42.617: INFO: namespace downward-api-6603 deletion completed in 6.146387074s • [SLOW TEST:10.287 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:42.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jun 29 14:32:42.691: INFO: Waiting up to 5m0s for pod "var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b" in namespace "var-expansion-9993" to be "success or failure" Jun 29 14:32:42.772: INFO: Pod "var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b": Phase="Pending", Reason="", readiness=false. Elapsed: 80.045955ms Jun 29 14:32:44.776: INFO: Pod "var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084265936s Jun 29 14:32:46.781: INFO: Pod "var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089021735s STEP: Saw pod success Jun 29 14:32:46.781: INFO: Pod "var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b" satisfied condition "success or failure" Jun 29 14:32:46.784: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b container dapi-container: STEP: delete the pod Jun 29 14:32:46.819: INFO: Waiting for pod var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b to disappear Jun 29 14:32:46.852: INFO: Pod var-expansion-9c8bb7a8-657a-413d-9b8b-214d2613203b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:46.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9993" for this suite. 
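Substitution in args uses the $(VAR) syntax, which the kubelet expands from the container's own env before the container starts. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["/bin/sh", "-c"]
    args: ["echo $(MESSAGE)"]          # $(MESSAGE) is expanded by the kubelet, not the shell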
Jun 29 14:32:52.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:32:53.057: INFO: namespace var-expansion-9993 deletion completed in 6.201718749s • [SLOW TEST:10.440 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:32:53.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 29 14:32:53.203: INFO: Waiting up to 5m0s for pod "pod-c5684053-3b92-450f-b9f6-a1facdfc38a2" in namespace "emptydir-7017" to be "success or failure" Jun 29 14:32:53.218: INFO: Pod "pod-c5684053-3b92-450f-b9f6-a1facdfc38a2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.230184ms Jun 29 14:32:55.222: INFO: Pod "pod-c5684053-3b92-450f-b9f6-a1facdfc38a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019190224s Jun 29 14:32:57.226: INFO: Pod "pod-c5684053-3b92-450f-b9f6-a1facdfc38a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02268s STEP: Saw pod success Jun 29 14:32:57.226: INFO: Pod "pod-c5684053-3b92-450f-b9f6-a1facdfc38a2" satisfied condition "success or failure" Jun 29 14:32:57.228: INFO: Trying to get logs from node iruya-worker pod pod-c5684053-3b92-450f-b9f6-a1facdfc38a2 container test-container: STEP: delete the pod Jun 29 14:32:57.257: INFO: Waiting for pod pod-c5684053-3b92-450f-b9f6-a1facdfc38a2 to disappear Jun 29 14:32:57.278: INFO: Pod pod-c5684053-3b92-450f-b9f6-a1facdfc38a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:32:57.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7017" for this suite. 
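The (root,0666,default) variant mounts an emptyDir on the node's default storage medium and verifies file modes inside it. A sketch of the kind of pod involved, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium: node-local disk; lifetime tied to the pod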
Jun 29 14:33:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:33:03.378: INFO: namespace emptydir-7017 deletion completed in 6.095660201s • [SLOW TEST:10.320 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:33:03.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-713c28d8-c5a0-42e3-ae7a-4072343a0dbc in namespace container-probe-5957 Jun 29 14:33:07.529: INFO: Started pod test-webserver-713c28d8-c5a0-42e3-ae7a-4072343a0dbc in namespace container-probe-5957 STEP: checking the pod's current state and verifying that restartCount is present Jun 29 14:33:07.531: INFO: Initial restart count of pod test-webserver-713c28d8-c5a0-42e3-ae7a-4072343a0dbc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:08.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5957" for this suite. 
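The probe configuration behind this test polls an HTTP endpoint that keeps succeeding, and the assertion is simply that restartCount stays at 0 over the observation window (the roughly four-minute gap in the timestamps above). A sketch with an illustrative image and path:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo                  # illustrative
spec:
  containers:
  - name: test-webserver
    image: nginx                       # illustrative; any server that answers the probe works
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                        # a path that always returns 200
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3              # restart only after 3 consecutive failures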
Jun 29 14:37:14.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:37:14.294: INFO: namespace container-probe-5957 deletion completed in 6.095785165s • [SLOW TEST:250.916 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:37:14.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-312dc0c0-24bd-4f8d-8841-b2dffba9c990 STEP: Creating a pod to test consume secrets Jun 29 14:37:14.356: INFO: Waiting up to 5m0s for pod "pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb" in namespace "secrets-6962" to be "success or failure" Jun 29 14:37:14.360: INFO: Pod "pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.916509ms Jun 29 14:37:16.364: INFO: Pod "pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008015911s Jun 29 14:37:18.368: INFO: Pod "pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011969148s STEP: Saw pod success Jun 29 14:37:18.368: INFO: Pod "pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb" satisfied condition "success or failure" Jun 29 14:37:18.371: INFO: Trying to get logs from node iruya-worker pod pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb container secret-volume-test: STEP: delete the pod Jun 29 14:37:18.410: INFO: Waiting for pod pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb to disappear Jun 29 14:37:18.414: INFO: Pod pod-secrets-ed49e565-3593-4641-bb7f-3d732b456fcb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:18.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6962" for this suite. 
Jun 29 14:37:24.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:37:24.534: INFO: namespace secrets-6962 deletion completed in 6.116545488s • [SLOW TEST:10.239 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:37:24.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-0a87a74c-5662-4c09-8bc9-6cb2e74921a3 STEP: Creating a pod to test consume configMaps Jun 29 14:37:24.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e" in namespace "configmap-3406" to be "success or failure" Jun 29 14:37:24.612: INFO: Pod "pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.585226ms Jun 29 14:37:26.617: INFO: Pod "pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008385802s Jun 29 14:37:28.621: INFO: Pod "pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012552762s STEP: Saw pod success Jun 29 14:37:28.621: INFO: Pod "pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e" satisfied condition "success or failure" Jun 29 14:37:28.623: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e container configmap-volume-test: STEP: delete the pod Jun 29 14:37:28.643: INFO: Waiting for pod pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e to disappear Jun 29 14:37:28.648: INFO: Pod pod-configmaps-b8cc8243-b277-495e-ad69-99dede4c8a1e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:28.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3406" for this suite. 
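Running the consuming container as non-root, as this test does, is just a securityContext on top of an ordinary configMap volume. A sketch with assumed names and UID:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo         # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # illustrative non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/data-1"]   # illustrative key name
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config                # illustrative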
Jun 29 14:37:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:37:34.749: INFO: namespace configmap-3406 deletion completed in 6.098281433s • [SLOW TEST:10.215 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:37:34.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 29 14:37:34.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-264,SelfLink:/api/v1/namespaces/watch-264/configmaps/e2e-watch-test-watch-closed,UID:5082bc04-f4d4-4d4c-b96b-364cd20539d9,ResourceVersion:19124520,Generation:0,CreationTimestamp:2020-06-29 14:37:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 29 14:37:34.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-264,SelfLink:/api/v1/namespaces/watch-264/configmaps/e2e-watch-test-watch-closed,UID:5082bc04-f4d4-4d4c-b96b-364cd20539d9,ResourceVersion:19124521,Generation:0,CreationTimestamp:2020-06-29 14:37:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 29 14:37:34.880: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-264,SelfLink:/api/v1/namespaces/watch-264/configmaps/e2e-watch-test-watch-closed,UID:5082bc04-f4d4-4d4c-b96b-364cd20539d9,ResourceVersion:19124522,Generation:0,CreationTimestamp:2020-06-29 14:37:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 29 14:37:34.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-264,SelfLink:/api/v1/namespaces/watch-264/configmaps/e2e-watch-test-watch-closed,UID:5082bc04-f4d4-4d4c-b96b-364cd20539d9,ResourceVersion:19124523,Generation:0,CreationTimestamp:2020-06-29 14:37:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:34.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-264" for this suite. Jun 29 14:37:40.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:37:41.009: INFO: namespace watch-264 deletion completed in 6.097752022s • [SLOW TEST:6.260 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:37:41.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-0c360a1d-3827-43e5-9738-6e17a0e46058 STEP: Creating a pod to test consume secrets Jun 29 14:37:41.068: INFO: Waiting up to 5m0s for pod "pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17" in namespace "secrets-3254" to be "success or failure" Jun 29 14:37:41.072: INFO: Pod "pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.473034ms Jun 29 14:37:43.076: INFO: Pod "pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007540609s Jun 29 14:37:45.080: INFO: Pod "pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011607681s STEP: Saw pod success Jun 29 14:37:45.080: INFO: Pod "pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17" satisfied condition "success or failure" Jun 29 14:37:45.083: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17 container secret-volume-test: STEP: delete the pod Jun 29 14:37:45.104: INFO: Waiting for pod pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17 to disappear Jun 29 14:37:45.108: INFO: Pod pod-secrets-79e0aef3-86f8-4575-8f1a-bea54d2bfa17 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:45.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3254" for this suite. Jun 29 14:37:51.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:37:51.241: INFO: namespace secrets-3254 deletion completed in 6.129294621s • [SLOW TEST:10.231 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:37:51.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 29 14:37:51.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33" in namespace "projected-3812" to be "success or failure" Jun 29 14:37:51.348: INFO: Pod "downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33": Phase="Pending", Reason="", readiness=false. Elapsed: 3.283226ms Jun 29 14:37:53.353: INFO: Pod "downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008076003s Jun 29 14:37:55.357: INFO: Pod "downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011691088s STEP: Saw pod success Jun 29 14:37:55.357: INFO: Pod "downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33" satisfied condition "success or failure" Jun 29 14:37:55.359: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33 container client-container: STEP: delete the pod Jun 29 14:37:55.388: INFO: Waiting for pod downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33 to disappear Jun 29 14:37:55.397: INFO: Pod downwardapi-volume-f440a982-c91b-43d5-8bc9-55b63dadcd33 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:37:55.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3812" for this suite. Jun 29 14:38:01.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:38:01.496: INFO: namespace projected-3812 deletion completed in 6.095772212s • [SLOW TEST:10.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:38:01.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jun 29 14:38:01.527: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jun 29 14:38:01.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:01.842: INFO: stderr: "" Jun 29 14:38:01.842: INFO: stdout: "service/redis-slave created\n" Jun 29 14:38:01.842: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jun 29 14:38:01.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:02.202: INFO: stderr: "" Jun 29 14:38:02.202: INFO: stdout: "service/redis-master created\n" Jun 29 14:38:02.202: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jun 29 14:38:02.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:02.543: INFO: stderr: "" Jun 29 14:38:02.543: INFO: stdout: "service/frontend created\n" Jun 29 14:38:02.543: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jun 29 14:38:02.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:02.820: INFO: stderr: "" Jun 29 14:38:02.820: INFO: stdout: "deployment.apps/frontend created\n" Jun 29 14:38:02.820: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 29 14:38:02.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:03.139: INFO: stderr: "" Jun 29 14:38:03.139: INFO: stdout: "deployment.apps/redis-master created\n" Jun 29 14:38:03.140: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jun 29 14:38:03.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6293' Jun 29 14:38:03.474: INFO: stderr: "" Jun 29 14:38:03.474: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jun 29 14:38:03.474: INFO: Waiting for all frontend pods to be Running. Jun 29 14:38:13.525: INFO: Waiting for frontend to serve content. Jun 29 14:38:13.575: INFO: Trying to add a new entry to the guestbook. Jun 29 14:38:13.608: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 29 14:38:13.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:13.771: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:13.771: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 29 14:38:13.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:13.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:13.924: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 29 14:38:13.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:14.086: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:14.086: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 29 14:38:14.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:14.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:14.219: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 29 14:38:14.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:14.345: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:14.345: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 29 14:38:14.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6293' Jun 29 14:38:14.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 29 14:38:14.490: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:38:14.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6293" for this suite. 
Jun 29 14:38:54.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:38:54.652: INFO: namespace kubectl-6293 deletion completed in 40.138424179s • [SLOW TEST:53.156 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:38:54.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 29 14:38:54.780: INFO: Waiting up to 5m0s for pod "pod-9a4a6459-6005-4034-b072-714cb11d0236" in namespace "emptydir-8133" to be "success or failure" Jun 29 14:38:54.795: INFO: Pod "pod-9a4a6459-6005-4034-b072-714cb11d0236": Phase="Pending", Reason="", readiness=false. Elapsed: 15.133539ms Jun 29 14:38:56.799: INFO: Pod "pod-9a4a6459-6005-4034-b072-714cb11d0236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01910918s Jun 29 14:38:58.802: INFO: Pod "pod-9a4a6459-6005-4034-b072-714cb11d0236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022784362s STEP: Saw pod success Jun 29 14:38:58.803: INFO: Pod "pod-9a4a6459-6005-4034-b072-714cb11d0236" satisfied condition "success or failure" Jun 29 14:38:58.805: INFO: Trying to get logs from node iruya-worker pod pod-9a4a6459-6005-4034-b072-714cb11d0236 container test-container: STEP: delete the pod Jun 29 14:38:58.839: INFO: Waiting for pod pod-9a4a6459-6005-4034-b072-714cb11d0236 to disappear Jun 29 14:38:58.843: INFO: Pod pod-9a4a6459-6005-4034-b072-714cb11d0236 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:38:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8133" for this suite. 
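The tmpfs variant of the emptyDir test differs from the default-medium one earlier only in the volume stanza and the file mode checked. A sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed; contents count against the pod's memory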
Jun 29 14:39:04.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 29 14:39:04.930: INFO: namespace emptydir-8133 deletion completed in 6.083998196s • [SLOW TEST:10.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 29 14:39:04.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0629 14:39:15.017603 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 29 14:39:15.017: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 29 14:39:15.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3093" for this suite. 
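The controller in this test stamps each pod it creates with an ownerReference, and deleting the controller without orphaning lets the garbage collector remove those pods, which is what the "wait for all pods to be garbage collected" step observes. A sketch of such a controller, with illustrative names and image:

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc                  # illustrative
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx                   # illustrative

With kubectl of this era, a plain delete of the controller cascades to its pods by default, while passing --cascade=false would orphan them instead.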
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:39:21.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 29 14:39:21.211: INFO: Waiting up to 5m0s for pod "pod-030e8957-b700-4d68-9b50-2ede97991256" in namespace "emptydir-9509" to be "success or failure"
Jun 29 14:39:21.226: INFO: Pod "pod-030e8957-b700-4d68-9b50-2ede97991256": Phase="Pending", Reason="", readiness=false. Elapsed: 15.32905ms
Jun 29 14:39:23.230: INFO: Pod "pod-030e8957-b700-4d68-9b50-2ede97991256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019552686s
Jun 29 14:39:25.233: INFO: Pod "pod-030e8957-b700-4d68-9b50-2ede97991256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022767234s
STEP: Saw pod success
Jun 29 14:39:25.233: INFO: Pod "pod-030e8957-b700-4d68-9b50-2ede97991256" satisfied condition "success or failure"
Jun 29 14:39:25.235: INFO: Trying to get logs from node iruya-worker2 pod pod-030e8957-b700-4d68-9b50-2ede97991256 container test-container:
STEP: delete the pod
Jun 29 14:39:25.258: INFO: Waiting for pod pod-030e8957-b700-4d68-9b50-2ede97991256 to disappear
Jun 29 14:39:25.316: INFO: Pod pod-030e8957-b700-4d68-9b50-2ede97991256 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:39:25.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9509" for this suite.
Jun 29 14:39:31.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:39:31.436: INFO: namespace emptydir-9509 deletion completed in 6.115270756s

• [SLOW TEST:10.311 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
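This spec differs from the earlier (root,0644,tmpfs) one only in what it checks: the filesystem type and mode of the tmpfs mount point itself, rather than of a file inside it. A sketch of the probe container under the same illustrative assumptions as the earlier pod sketch; image and paths are assumptions, not the framework's code.

// Sketch of the part that differs from the earlier emptyDir sketch: the
// container probes the mount point itself. Image and paths illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		// Print the filesystem type and the mount directory's mode; the test
		// asserts the mount is tmpfs with the expected permissions.
		Command:      []string{"sh", "-c", "stat -f -c %T /mnt && stat -c %a /mnt"},
		VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
	}
	fmt.Printf("%+v\n", c)
}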
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:39:31.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 29 14:39:31.544: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:31.572: INFO: Number of nodes with available pods: 0
Jun 29 14:39:31.572: INFO: Node iruya-worker is running more than one daemon pod
Jun 29 14:39:32.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:32.631: INFO: Number of nodes with available pods: 0
Jun 29 14:39:32.631: INFO: Node iruya-worker is running more than one daemon pod
Jun 29 14:39:33.576: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:33.579: INFO: Number of nodes with available pods: 0
Jun 29 14:39:33.579: INFO: Node iruya-worker is running more than one daemon pod
Jun 29 14:39:34.812: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:34.816: INFO: Number of nodes with available pods: 1
Jun 29 14:39:34.816: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:35.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:35.581: INFO: Number of nodes with available pods: 1
Jun 29 14:39:35.581: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:36.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:36.582: INFO: Number of nodes with available pods: 2
Jun 29 14:39:36.582: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 29 14:39:36.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:36.662: INFO: Number of nodes with available pods: 1
Jun 29 14:39:36.662: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:37.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:37.671: INFO: Number of nodes with available pods: 1
Jun 29 14:39:37.671: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:38.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:38.672: INFO: Number of nodes with available pods: 1
Jun 29 14:39:38.672: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:39.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:39.672: INFO: Number of nodes with available pods: 1
Jun 29 14:39:39.672: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:40.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:40.672: INFO: Number of nodes with available pods: 1
Jun 29 14:39:40.672: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:41.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:41.672: INFO: Number of nodes with available pods: 1
Jun 29 14:39:41.672: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:42.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:42.672: INFO: Number of nodes with available pods: 1
Jun 29 14:39:42.672: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:43.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:43.671: INFO: Number of nodes with available pods: 1
Jun 29 14:39:43.671: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:44.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:44.671: INFO: Number of nodes with available pods: 1
Jun 29 14:39:44.671: INFO: Node iruya-worker2 is running more than one daemon pod
Jun 29 14:39:45.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 29 14:39:45.671: INFO: Number of nodes with available pods: 2
Jun 29 14:39:45.671: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-10, will wait for the garbage collector to delete the pods
Jun 29 14:39:45.734: INFO: Deleting DaemonSet.extensions daemon-set took: 6.833568ms
Jun 29 14:39:46.034: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276854ms
Jun 29 14:39:52.038: INFO: Number of nodes with available pods: 0
Jun 29 14:39:52.038: INFO: Number of running nodes: 0, number of available pods: 0
Jun 29 14:39:52.107: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-10/daemonsets","resourceVersion":"19125189"},"items":null}
Jun 29 14:39:52.110: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-10/pods","resourceVersion":"19125189"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:39:52.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-10" for this suite.
Jun 29 14:39:58.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:39:58.220: INFO: namespace daemonsets-10 deletion completed in 6.097485242s

• [SLOW TEST:26.783 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
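The repeated "can't tolerate node iruya-control-plane" lines above are expected: the DaemonSet's pod template carries no toleration for the master node's node-role.kubernetes.io/master:NoSchedule taint, so the framework skips that node and waits for pods on the two workers only. A minimal sketch of a comparable DaemonSet; the name, labels, and image are illustrative assumptions, not the test's actual manifest.

// Minimal sketch of a DaemonSet comparable to "daemon-set" above. With no
// toleration for the master taint, its pods land only on worker nodes,
// matching the log. Names, labels, and image are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set-demo"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Adding a toleration for node-role.kubernetes.io/master
					// here would let the pods schedule onto the control-plane
					// node as well.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}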
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:39:58.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 29 14:39:58.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-245'
Jun 29 14:40:01.091: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 29 14:40:01.091: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jun 29 14:40:01.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-245'
Jun 29 14:40:01.267: INFO: stderr: ""
Jun 29 14:40:01.267: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:40:01.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-245" for this suite.
Jun 29 14:40:07.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:40:07.369: INFO: namespace kubectl-245 deletion completed in 6.094434013s

• [SLOW TEST:9.149 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
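The "verifying the pod controlled by e2e-test-nginx-deployment gets created" step above amounts to listing pods by the label the generator attached and checking that at least one exists. A hedged client-go sketch of that check; the run=<name> label is what the deprecated deployment generator of this era applied (on current kubectl, kubectl run only creates pods and kubectl create deployment is the replacement), and the namespace is taken from the log.

// Sketch of the "verifying the pod ... gets created" step: list pods by
// label selector and check at least one exists. The run=<name> label is
// an assumption about the deprecated deployment generator's behavior.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("kubectl-245").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "run=e2e-test-nginx-deployment"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d pod(s) controlled by the deployment\n", len(pods.Items))
}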
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:40:07.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 29 14:40:07.438: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:40:13.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8350" for this suite.
Jun 29 14:40:19.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:40:19.943: INFO: namespace init-container-8350 deletion completed in 6.084563895s

• [SLOW TEST:12.573 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
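What the InitContainer spec above sets up: a pod with RestartPolicy Never whose init container exits non-zero. Init containers must all succeed before app containers start, so the pod moves to phase Failed and the app container never runs. A minimal sketch of a pod with that shape; names, image, and commands are illustrative assumptions.

// Sketch of the shape of pod this spec creates: with RestartPolicy Never,
// a failing init container moves the pod to phase Failed and the app
// container never starts. Names, image, and commands are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app-never-runs",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo should not start"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}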
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:40:19.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:40:24.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7624" for this suite.
Jun 29 14:40:30.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:40:30.139: INFO: namespace kubelet-test-7624 deletion completed in 6.093026776s

• [SLOW TEST:10.196 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
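The Kubelet spec above runs a busybox command that always fails and then inspects the container's status: the terminated state should carry a non-empty Reason (typically "Error") and a non-zero exit code. A hedged sketch of reading those fields with a recent client-go API; the pod and namespace names are illustrative.

// Sketch of the status check behind this spec: after a command that always
// fails, State.Terminated should carry a Reason and non-zero ExitCode.
// Recent client-go API; pod and namespace names are illustrative.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("demo-ns").Get(context.TODO(), "always-fails", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		// For a RestartNever pod whose command failed, Terminated is set.
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("container %s terminated: reason=%q exitCode=%d\n", cs.Name, t.Reason, t.ExitCode)
		}
	}
}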
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 29 14:40:30.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 29 14:40:30.206: INFO: Waiting up to 5m0s for pod "downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb" in namespace "downward-api-4642" to be "success or failure"
Jun 29 14:40:30.221: INFO: Pod "downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.535282ms
Jun 29 14:40:32.226: INFO: Pod "downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019765302s
Jun 29 14:40:34.230: INFO: Pod "downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024071402s
STEP: Saw pod success
Jun 29 14:40:34.230: INFO: Pod "downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb" satisfied condition "success or failure"
Jun 29 14:40:34.233: INFO: Trying to get logs from node iruya-worker2 pod downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb container dapi-container:
STEP: delete the pod
Jun 29 14:40:34.262: INFO: Waiting for pod downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb to disappear
Jun 29 14:40:34.347: INFO: Pod downward-api-8f50ea4c-e65d-4278-97c9-e2ad706a50bb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 29 14:40:34.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4642" for this suite.
Jun 29 14:40:40.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 29 14:40:40.463: INFO: namespace downward-api-4642 deletion completed in 6.112914795s

• [SLOW TEST:10.324 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
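The interesting part of the Downward API spec above is that the container declares no CPU or memory limits, yet still exposes limits.cpu and limits.memory through downward-API env vars; in that case the values default to the node's allocatable capacity. A minimal sketch of that env wiring; the pod name, image, env var names, and command are illustrative assumptions.

// Sketch of the downward-API wiring this spec tests: resourceFieldRef env
// vars for limits.cpu/limits.memory on a container with no limits set, so
// the values default to node allocatable. Names and image illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "limits.cpu",
							Divisor:  resource.MustParse("1"),
						}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "limits.memory",
							Divisor:  resource.MustParse("1"),
						}}},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}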
SSSS
Jun 29 14:40:40.463: INFO: Running AfterSuite actions on all nodes
Jun 29 14:40:40.463: INFO: Running AfterSuite actions on node 1
Jun 29 14:40:40.463: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6283.064 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS