I0330 12:55:43.821959 6 e2e.go:243] Starting e2e run "0fa0561c-23b6-41b4-a4df-392116d28243" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585572942 - Will randomize all specs
Will run 215 of 4412 specs

Mar 30 12:55:44.006: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 12:55:44.008: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 30 12:55:44.032: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 30 12:55:44.070: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 30 12:55:44.070: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 30 12:55:44.070: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 30 12:55:44.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 30 12:55:44.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 30 12:55:44.079: INFO: e2e test version: v1.15.10
Mar 30 12:55:44.080: INFO: kube-apiserver version: v1.15.7
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:55:44.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Mar 30 12:55:44.145: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 30 12:55:44.152: INFO: Waiting up to 5m0s for pod "pod-a4909930-e403-4a02-924f-f388e3c4aed1" in namespace "emptydir-523" to be "success or failure"
Mar 30 12:55:44.179: INFO: Pod "pod-a4909930-e403-4a02-924f-f388e3c4aed1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.971679ms
Mar 30 12:55:46.183: INFO: Pod "pod-a4909930-e403-4a02-924f-f388e3c4aed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031663763s
Mar 30 12:55:48.188: INFO: Pod "pod-a4909930-e403-4a02-924f-f388e3c4aed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036031834s
STEP: Saw pod success
Mar 30 12:55:48.188: INFO: Pod "pod-a4909930-e403-4a02-924f-f388e3c4aed1" satisfied condition "success or failure"
Mar 30 12:55:48.191: INFO: Trying to get logs from node iruya-worker2 pod pod-a4909930-e403-4a02-924f-f388e3c4aed1 container test-container:
STEP: delete the pod
Mar 30 12:55:48.213: INFO: Waiting for pod pod-a4909930-e403-4a02-924f-f388e3c4aed1 to disappear
Mar 30 12:55:48.217: INFO: Pod pod-a4909930-e403-4a02-924f-f388e3c4aed1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:55:48.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-523" for this suite.
Mar 30 12:55:54.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:55:54.330: INFO: namespace emptydir-523 deletion completed in 6.109728098s

• [SLOW TEST:10.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
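For orientation, the shape of the spec this test exercises can be reproduced by hand: a memory-backed (tmpfs) emptyDir mounted by a container running as a non-root user. This is a minimal sketch, not the exact fixture the framework generates; the pod name, image, user ID, and mount path are illustrative assumptions:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo      # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root, per the (non-root,...) variant
    containers:
    - name: test-container
      image: busybox               # assumed image; the suite ships its own test image
      command: ["sh", "-c", "stat -c 'perms=%a' /mnt/volume"]
      volumeMounts:
      - name: vol
        mountPath: /mnt/volume
    volumes:
    - name: vol
      emptyDir:
        medium: Memory             # "tmpfs" in the test name maps to medium: Memory
  EOF

The pod runs to completion and its log reports the volume's mode, which is the kind of check the test asserts on before deleting the pod.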
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:55:54.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 30 12:55:54.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8806'
Mar 30 12:55:56.662: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 30 12:55:56.662: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Mar 30 12:55:58.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8806'
Mar 30 12:55:58.845: INFO: stderr: ""
Mar 30 12:55:58.845: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:55:58.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8806" for this suite.
Mar 30 12:58:00.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:58:00.938: INFO: namespace kubectl-8806 deletion completed in 2m2.089373701s

• [SLOW TEST:126.607 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
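The deprecation warning in the stderr above points directly at the replacement. Assuming a comparable cluster and namespace, the same deployment can be created either way; the second form is what the warning recommends:

  # What the test ran (generator-based kubectl run, deprecated in this release):
  kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
      --generator=deployment/apps.v1 --namespace=kubectl-8806

  # The non-deprecated equivalent suggested by the warning:
  kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8806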
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:58:00.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8d1d4316-c8e8-44cf-9575-7f5234b85e47
STEP: Creating a pod to test consume configMaps
Mar 30 12:58:00.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de" in namespace "projected-7139" to be "success or failure"
Mar 30 12:58:01.004: INFO: Pod "pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de": Phase="Pending", Reason="", readiness=false. Elapsed: 10.671126ms
Mar 30 12:58:03.008: INFO: Pod "pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015170547s
Mar 30 12:58:05.011: INFO: Pod "pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01859343s
STEP: Saw pod success
Mar 30 12:58:05.011: INFO: Pod "pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de" satisfied condition "success or failure"
Mar 30 12:58:05.014: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de container projected-configmap-volume-test:
STEP: delete the pod
Mar 30 12:58:05.148: INFO: Waiting for pod pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de to disappear
Mar 30 12:58:05.159: INFO: Pod pod-projected-configmaps-270b826c-8a67-446f-b75e-a76c027179de no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:58:05.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7139" for this suite.
Mar 30 12:58:11.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:58:11.266: INFO: namespace projected-7139 deletion completed in 6.104075858s

• [SLOW TEST:10.328 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
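The technique here is one configMap projected into two volumes of the same pod. A minimal hand-run sketch, with the configMap name, key, and mount paths as illustrative assumptions:

  kubectl create configmap demo-cm --from-literal=data-1=value-1    # hypothetical key/value
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-multi-demo     # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox               # assumed image
      command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-1
      - name: vol-2
        mountPath: /etc/projected-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: demo-cm
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: demo-cm          # same source consumed through a second volume
  EOF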
SSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:58:11.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 30 12:58:11.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:58:15.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6463" for this suite.
Mar 30 12:58:53.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:58:53.557: INFO: namespace pods-6463 deletion completed in 38.094778852s

• [SLOW TEST:42.291 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
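This test dials the pod's exec subresource over a WebSocket-upgraded connection. A manual approximation, with the pod name as a placeholder (kubectl negotiates the streaming connection itself):

  kubectl exec -n pods-6463 <pod-name> -- echo remote-exec-ok

  # For orientation, the subresource the websocket client talks to looks like:
  #   /api/v1/namespaces/pods-6463/pods/<pod-name>/exec?command=echo&command=remote-exec-ok&stdout=true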
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:58:53.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 30 12:58:56.656: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:58:56.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3012" for this suite.
Mar 30 12:59:02.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:59:02.915: INFO: namespace container-runtime-3012 deletion completed in 6.097619992s

• [SLOW TEST:9.357 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
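The "DONE" message in the assertion above is whatever the container writes to its termination message file. A minimal sketch of the pattern, assuming an illustrative image, user ID, and custom path (the suite's exact fixture differs):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo         # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root, per the test name
    containers:
    - name: main
      image: busybox               # assumed image
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
      terminationMessagePath: /dev/termination-custom   # non-default path
  EOF

  # After the pod exits, the message surfaces in the container status:
  kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'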
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:59:02.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 30 12:59:02.971: INFO: Waiting up to 5m0s for pod "pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12" in namespace "emptydir-8342" to be "success or failure"
Mar 30 12:59:02.974: INFO: Pod "pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568676ms
Mar 30 12:59:04.978: INFO: Pod "pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007607296s
Mar 30 12:59:06.982: INFO: Pod "pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011320763s
STEP: Saw pod success
Mar 30 12:59:06.982: INFO: Pod "pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12" satisfied condition "success or failure"
Mar 30 12:59:06.985: INFO: Trying to get logs from node iruya-worker pod pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12 container test-container:
STEP: delete the pod
Mar 30 12:59:07.044: INFO: Waiting for pod pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12 to disappear
Mar 30 12:59:07.046: INFO: Pod pod-a9cfa21d-aeb2-4752-a0ac-690d59d27c12 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:59:07.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8342" for this suite.
Mar 30 12:59:13.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:59:13.141: INFO: namespace emptydir-8342 deletion completed in 6.091717916s

• [SLOW TEST:10.226 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:59:13.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 30 12:59:17.768: INFO: Successfully updated pod "labelsupdatefec3e0f4-17c3-4a46-8c96-24342932168f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:59:19.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2530" for this suite.
Mar 30 12:59:41.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:59:41.931: INFO: namespace downward-api-2530 deletion completed in 22.120761814s

• [SLOW TEST:28.790 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
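The mechanism under test: pod labels are projected into a file through a downwardAPI volume, and the file is refreshed when the labels change. A minimal sketch with hypothetical names and label values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo              # hypothetical name
    labels:
      key1: value1
  spec:
    containers:
    - name: client-container
      image: busybox               # assumed image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF

  # Relabelling the pod is eventually reflected inside the running container:
  kubectl label pod labels-demo key1=value2 --overwrite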
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:59:41.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar 30 12:59:46.027: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 30 12:59:51.122: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 12:59:51.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3389" for this suite.
Mar 30 12:59:57.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 12:59:57.219: INFO: namespace pods-3389 deletion completed in 6.088809863s

• [SLOW TEST:15.287 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
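The delete here is issued gracefully rather than immediately, and the test then polls until the kubelet has observed the termination notice. A hand-run equivalent, with the pod name as a placeholder:

  # Request graceful deletion with an explicit grace period (in seconds):
  kubectl delete pod <pod-name> -n pods-3389 --grace-period=30

  # The flag overrides the spec-level default:
  #   spec:
  #     terminationGracePeriodSeconds: 30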
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 12:59:57.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 30 13:00:01.328: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:00:01.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6766" for this suite.
Mar 30 13:00:07.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:00:07.430: INFO: namespace container-runtime-6766 deletion completed in 6.083037336s

• [SLOW TEST:10.211 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
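With FallbackToLogsOnError, logs are only used as the termination message when the container fails; since this pod succeeds, the message still comes from the file, matching the "OK" assertion above. A minimal sketch with an assumed image and pod name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: fallback-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox               # assumed image
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePolicy: FallbackToLogsOnError   # logs used only on failure
  EOF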
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:00:07.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Mar 30 13:00:07.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7017'
Mar 30 13:00:07.808: INFO: stderr: ""
Mar 30 13:00:07.808: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Mar 30 13:00:08.812: INFO: Selector matched 1 pods for map[app:redis]
Mar 30 13:00:08.813: INFO: Found 0 / 1
Mar 30 13:00:09.848: INFO: Selector matched 1 pods for map[app:redis]
Mar 30 13:00:09.848: INFO: Found 0 / 1
Mar 30 13:00:10.812: INFO: Selector matched 1 pods for map[app:redis]
Mar 30 13:00:10.812: INFO: Found 0 / 1
Mar 30 13:00:11.811: INFO: Selector matched 1 pods for map[app:redis]
Mar 30 13:00:11.811: INFO: Found 1 / 1
Mar 30 13:00:11.811: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 30 13:00:11.814: INFO: Selector matched 1 pods for map[app:redis]
Mar 30 13:00:11.814: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Mar 30 13:00:11.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017'
Mar 30 13:00:11.919: INFO: stderr: ""
Mar 30 13:00:11.919: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Mar 13:00:10.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Mar 13:00:10.224 # Server started, Redis version 3.2.12\n1:M 30 Mar 13:00:10.224 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Mar 13:00:10.224 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Mar 30 13:00:11.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017 --tail=1'
Mar 30 13:00:12.024: INFO: stderr: ""
Mar 30 13:00:12.024: INFO: stdout: "1:M 30 Mar 13:00:10.224 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Mar 30 13:00:12.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017 --limit-bytes=1'
Mar 30 13:00:12.139: INFO: stderr: ""
Mar 30 13:00:12.139: INFO: stdout: " "
STEP: exposing timestamps
Mar 30 13:00:12.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017 --tail=1 --timestamps'
Mar 30 13:00:12.251: INFO: stderr: ""
Mar 30 13:00:12.251: INFO: stdout: "2020-03-30T13:00:10.224648133Z 1:M 30 Mar 13:00:10.224 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Mar 30 13:00:14.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017 --since=1s'
Mar 30 13:00:14.871: INFO: stderr: ""
Mar 30 13:00:14.871: INFO: stdout: ""
Mar 30 13:00:14.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pzlgx redis-master --namespace=kubectl-7017 --since=24h'
Mar 30 13:00:14.982: INFO: stderr: ""
Mar 30 13:00:14.982: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Mar 13:00:10.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Mar 13:00:10.224 # Server started, Redis version 3.2.12\n1:M 30 Mar 13:00:10.224 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Mar 13:00:10.224 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Mar 30 13:00:14.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7017'
Mar 30 13:00:15.077: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 30 13:00:15.077: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Mar 30 13:00:15.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7017'
Mar 30 13:00:15.175: INFO: stderr: "No resources found.\n"
Mar 30 13:00:15.175: INFO: stdout: ""
Mar 30 13:00:15.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7017 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 30 13:00:15.257: INFO: stderr: ""
Mar 30 13:00:15.257: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:00:15.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7017" for this suite.
Mar 30 13:00:37.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:00:37.365: INFO: namespace kubectl-7017 deletion completed in 22.103788676s

• [SLOW TEST:29.935 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
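The filtering flags exercised above are exactly the ones available from the command line; taken straight from the commands in the log (only the --namespace spelling shortened):

  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017                   # full log
  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017 --tail=1          # last line only
  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017 --limit-bytes=1   # first byte only
  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017 --tail=1 --timestamps
  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017 --since=1s        # empty if the pod was idle
  kubectl logs redis-master-pzlgx redis-master -n kubectl-7017 --since=24h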
SSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:00:37.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Mar 30 13:00:37.427: INFO: Waiting up to 5m0s for pod "var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8" in namespace "var-expansion-1289" to be "success or failure"
Mar 30 13:00:37.431: INFO: Pod "var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680866ms
Mar 30 13:00:39.435: INFO: Pod "var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007745869s
Mar 30 13:00:41.439: INFO: Pod "var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012264316s
STEP: Saw pod success
Mar 30 13:00:41.440: INFO: Pod "var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8" satisfied condition "success or failure"
Mar 30 13:00:41.443: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8 container dapi-container:
STEP: delete the pod
Mar 30 13:00:41.473: INFO: Waiting for pod var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8 to disappear
Mar 30 13:00:41.485: INFO: Pod var-expansion-f424fdae-aec9-4249-b9da-ecdddfa86cc8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:00:41.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1289" for this suite.
Mar 30 13:00:47.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:00:47.575: INFO: namespace var-expansion-1289 deletion completed in 6.086583564s

• [SLOW TEST:10.210 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
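The substitution under test is kubelet-side $(VAR) expansion in args, which happens before any shell sees the command. A minimal sketch with an assumed image and variable name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox               # assumed image
      env:
      - name: MESSAGE
        value: "hello from the environment"
      command: ["sh", "-c"]
      args: ["echo $(MESSAGE)"]    # $(VAR) is expanded by the kubelet, not the shell
  EOF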
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:00:47.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Mar 30 13:00:47.653: INFO: Waiting up to 5m0s for pod "client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9" in namespace "containers-7728" to be "success or failure"
Mar 30 13:00:47.671: INFO: Pod "client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.936991ms
Mar 30 13:00:49.674: INFO: Pod "client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021192128s
Mar 30 13:00:51.678: INFO: Pod "client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025516019s
STEP: Saw pod success
Mar 30 13:00:51.678: INFO: Pod "client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9" satisfied condition "success or failure"
Mar 30 13:00:51.682: INFO: Trying to get logs from node iruya-worker pod client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9 container test-container:
STEP: delete the pod
Mar 30 13:00:51.700: INFO: Waiting for pod client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9 to disappear
Mar 30 13:00:51.746: INFO: Pod client-containers-f5f23998-c903-4b9c-9bae-d2f2618312b9 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:00:51.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7728" for this suite.
Mar 30 13:00:57.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:00:57.833: INFO: namespace containers-7728 deletion completed in 6.083232434s

• [SLOW TEST:10.258 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
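The mapping being tested: a pod spec's command replaces the image's ENTRYPOINT, and args replaces its CMD. A minimal sketch with an assumed image and argument values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: entrypoint-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox               # assumed image
      command: ["/bin/echo"]       # overrides the image's ENTRYPOINT
      args: ["override", "arguments"]   # overrides the image's CMD
  EOF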
SS
------------------------------
[sig-network] Services should serve a basic endpoint from pods  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:00:57.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9693
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9693 to expose endpoints map[]
Mar 30 13:00:57.950: INFO: successfully validated that service endpoint-test2 in namespace services-9693 exposes endpoints map[] (21.54711ms elapsed)
STEP: Creating pod pod1 in namespace services-9693
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9693 to expose endpoints map[pod1:[80]]
Mar 30 13:01:01.019: INFO: successfully validated that service endpoint-test2 in namespace services-9693 exposes endpoints map[pod1:[80]] (3.060175764s elapsed)
STEP: Creating pod pod2 in namespace services-9693
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9693 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 30 13:01:04.156: INFO: successfully validated that service endpoint-test2 in namespace services-9693 exposes endpoints map[pod1:[80] pod2:[80]] (3.133078003s elapsed)
STEP: Deleting pod pod1 in namespace services-9693
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9693 to expose endpoints map[pod2:[80]]
Mar 30 13:01:04.215: INFO: successfully validated that service endpoint-test2 in namespace services-9693 exposes endpoints map[pod2:[80]] (53.704551ms elapsed)
STEP: Deleting pod pod2 in namespace services-9693
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9693 to expose endpoints map[]
Mar 30 13:01:05.226: INFO: successfully validated that service endpoint-test2 in namespace services-9693 exposes endpoints map[] (1.006787334s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:01:05.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9693" for this suite.
Mar 30 13:01:27.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:01:27.365: INFO: namespace services-9693 deletion completed in 22.087948206s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:29.532 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
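The behavior being verified: a pod whose labels match a service's selector appears in that service's endpoints once it is running and Ready, and drops out when deleted. A minimal sketch using the service and pod names from the log (the selector label and image are assumptions):

  kubectl apply -n services-9693 -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: endpoint-test2
  spec:
    selector:
      app: endpoint-test2          # assumed selector label
    ports:
    - port: 80
      targetPort: 80
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    labels:
      app: endpoint-test2          # matches the service selector
  spec:
    containers:
    - name: web
      image: docker.io/library/nginx:1.14-alpine   # assumed image
      ports:
      - containerPort: 80
  EOF

  # pod1 shows up under the service once Ready; deleting it empties the list again:
  kubectl get endpoints endpoint-test2 -n services-9693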
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:01:27.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 30 13:01:35.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 30 13:01:35.526: INFO: Pod pod-with-prestop-http-hook still exists
Mar 30 13:01:37.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 30 13:01:37.530: INFO: Pod pod-with-prestop-http-hook still exists
Mar 30 13:01:39.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 30 13:01:39.530: INFO: Pod pod-with-prestop-http-hook still exists
Mar 30 13:01:41.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 30 13:01:41.529: INFO: Pod pod-with-prestop-http-hook still exists
Mar 30 13:01:43.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 30 13:01:43.530: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:01:43.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9748" for this suite.
Mar 30 13:02:05.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:02:05.628: INFO: namespace container-lifecycle-hook-9748 deletion completed in 22.087048181s

• [SLOW TEST:38.263 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
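The feature exercised: a preStop httpGet hook that the kubelet fires when the pod is deleted, before the container is stopped (the test points it at a separate handler pod it created first). A minimal sketch; the image, port, and handler path are illustrative assumptions:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine   # assumed image
      lifecycle:
        preStop:
          httpGet:
            path: /prestop-hook    # hypothetical handler endpoint
            port: 80               # with host omitted, the request targets the pod's own IP
  EOF

  # Deleting the pod triggers the hook during graceful termination:
  kubectl delete pod pod-with-prestop-http-hook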
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:02:05.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:02:10.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3825" for this suite.
Mar 30 13:02:32.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:02:32.857: INFO: namespace replication-controller-3825 deletion completed in 22.094334788s

• [SLOW TEST:27.228 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
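Adoption works through label selection: an orphan pod already carrying the 'name' label is claimed by a controller whose selector matches it, instead of the controller creating a new replica. A minimal sketch (the image is an assumption; the 'name: pod-adoption' label follows the steps above):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption           # the label the controller selects on
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine   # assumed image
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption           # matches the pre-existing pod, so it is adopted
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: main
          image: docker.io/library/nginx:1.14-alpine
  EOF

  # The orphan acquires an ownerReference instead of a second pod being created:
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'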
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:02:32.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 30 13:02:32.952: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 30 13:02:32.959: INFO: Number of nodes with available pods: 0
Mar 30 13:02:32.959: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 30 13:02:33.047: INFO: Number of nodes with available pods: 0
Mar 30 13:02:33.047: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:34.051: INFO: Number of nodes with available pods: 0
Mar 30 13:02:34.051: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:35.051: INFO: Number of nodes with available pods: 0
Mar 30 13:02:35.051: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:36.051: INFO: Number of nodes with available pods: 0
Mar 30 13:02:36.051: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:37.051: INFO: Number of nodes with available pods: 1
Mar 30 13:02:37.051: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 30 13:02:37.078: INFO: Number of nodes with available pods: 1
Mar 30 13:02:37.078: INFO: Number of running nodes: 0, number of available pods: 1
Mar 30 13:02:38.107: INFO: Number of nodes with available pods: 0
Mar 30 13:02:38.107: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 30 13:02:38.122: INFO: Number of nodes with available pods: 0
Mar 30 13:02:38.122: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:39.127: INFO: Number of nodes with available pods: 0
Mar 30 13:02:39.127: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:40.126: INFO: Number of nodes with available pods: 0
Mar 30 13:02:40.126: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:41.127: INFO: Number of nodes with available pods: 0
Mar 30 13:02:41.127: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:42.127: INFO: Number of nodes with available pods: 0
Mar 30 13:02:42.127: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:43.127: INFO: Number of nodes with available pods: 0
Mar 30 13:02:43.127: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:02:44.126: INFO: Number of nodes with available pods: 1
Mar 30 13:02:44.127: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3405, will wait for the garbage collector to delete the pods
Mar 30 13:02:44.192: INFO: Deleting DaemonSet.extensions daemon-set took: 6.753787ms
Mar 30 13:02:44.493: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.39957ms
Mar 30 13:02:52.202: INFO: Number of nodes with available pods: 0
Mar 30 13:02:52.202: INFO: Number of running nodes: 0, number of available pods: 0
Mar 30 13:02:52.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3405/daemonsets","resourceVersion":"2671214"},"items":null}
Mar 30 13:02:52.209: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3405/pods","resourceVersion":"2671214"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:02:52.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3405" for this suite.
Mar 30 13:02:58.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:02:58.332: INFO: namespace daemonsets-3405 deletion completed in 6.090361891s

• [SLOW TEST:25.474 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
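The "complex daemon" is a DaemonSet constrained by a nodeSelector, so daemon pods follow node labels: labelling a node schedules a pod there, relabelling evicts it. A minimal sketch; the label key, image, and names are illustrative assumptions (the test uses the DaemonSet name "daemon-set" and flips labels blue to green, and also switches the update strategy to RollingUpdate as logged above):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    updateStrategy:
      type: RollingUpdate          # the strategy the test switches to
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        nodeSelector:
          color: blue              # hypothetical label key; the test flips blue -> green
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine   # assumed image
  EOF

  # Labelling a node schedules a daemon pod there; relabelling unschedules it:
  kubectl label node iruya-worker color=blue
  kubectl label node iruya-worker color=green --overwrite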
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:02:58.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-46660570-a493-4f68-a405-77c39d1e11f8
STEP: Creating a pod to test consume configMaps
Mar 30 13:02:58.403: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835" in namespace "projected-722" to be "success or failure"
Mar 30 13:02:58.407: INFO: Pod "pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139548ms
Mar 30 13:03:00.411: INFO: Pod "pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008448096s
Mar 30 13:03:02.416: INFO: Pod "pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012826117s
STEP: Saw pod success
Mar 30 13:03:02.416: INFO: Pod "pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835" satisfied condition "success or failure"
Mar 30 13:03:02.419: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835 container projected-configmap-volume-test:
STEP: delete the pod
Mar 30 13:03:02.462: INFO: Waiting for pod pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835 to disappear
Mar 30 13:03:02.467: INFO: Pod pod-projected-configmaps-83dac5aa-744d-4eb6-b619-13120b5ef835 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:02.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-722" for this suite.
Mar 30 13:03:08.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:08.565: INFO: namespace projected-722 deletion completed in 6.094241856s

• [SLOW TEST:10.233 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
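Unlike the earlier multi-volume variant, "with mappings" means individual configMap keys are remapped to chosen file names via items. A minimal sketch, with hypothetical names, key, and target path:

  kubectl create configmap projected-map-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mappings-demo  # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root, per the test name
    containers:
    - name: projected-configmap-volume-test
      image: busybox               # assumed image
      command: ["sh", "-c", "cat /etc/projected/renamed-key"]
      volumeMounts:
      - name: vol
        mountPath: /etc/projected
    volumes:
    - name: vol
      projected:
        sources:
        - configMap:
            name: projected-map-demo
            items:
            - key: data-1          # source key in the configMap
              path: renamed-key    # mapped file name inside the volume
  EOF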
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:08.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 30 13:03:08.620: INFO: Waiting up to 5m0s for pod "pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475" in namespace "emptydir-112" to be "success or failure"
Mar 30 13:03:08.634: INFO: Pod "pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475": Phase="Pending", Reason="", readiness=false. Elapsed: 14.4413ms
Mar 30 13:03:10.638: INFO: Pod "pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018384454s
Mar 30 13:03:12.643: INFO: Pod "pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022856521s
STEP: Saw pod success
Mar 30 13:03:12.643: INFO: Pod "pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475" satisfied condition "success or failure"
Mar 30 13:03:12.646: INFO: Trying to get logs from node iruya-worker2 pod pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475 container test-container:
STEP: delete the pod
Mar 30 13:03:12.662: INFO: Waiting for pod pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475 to disappear
Mar 30 13:03:12.667: INFO: Pod pod-c9620fbb-9ca4-4772-9e9f-d93e90db4475 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:12.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-112" for this suite.
Mar 30 13:03:18.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:18.767: INFO: namespace emptydir-112 deletion completed in 6.09678074s

• [SLOW TEST:10.202 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Mar 30 13:03:18.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:18.767: INFO: namespace emptydir-112 deletion completed in 6.09678074s

• [SLOW TEST:10.202 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:18.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 30 13:03:18.840: INFO: Waiting up to 5m0s for pod "pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c" in namespace "emptydir-5230" to be "success or failure"
Mar 30 13:03:18.847: INFO: Pod "pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955696ms
Mar 30 13:03:20.851: INFO: Pod "pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011293168s
Mar 30 13:03:22.856: INFO: Pod "pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016102679s
STEP: Saw pod success
Mar 30 13:03:22.856: INFO: Pod "pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c" satisfied condition "success or failure"
Mar 30 13:03:22.858: INFO: Trying to get logs from node iruya-worker pod pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c container test-container:
STEP: delete the pod
Mar 30 13:03:22.884: INFO: Waiting for pod pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c to disappear
Mar 30 13:03:22.895: INFO: Pod pod-b4aea985-e3e2-42aa-88c7-d73ea5ac246c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:22.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5230" for this suite.
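
The emptyDir specs in this run (tmpfs and node-default medium, root and non-root, various modes) all follow one pattern: mount an emptyDir, create a file with the requested mode, and print its permissions for the framework to assert on. A minimal sketch of the tmpfs variant follows; the image, paths, and shell commands are assumptions, not taken from the log.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts a tmpfs-backed emptyDir and prints the permissions
// of a freshly created 0644 file for the test to verify.
func emptyDirPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "touch /mnt/test && chmod 0644 /mnt/test && ls -l /mnt/test"},
                VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" requests tmpfs; leaving Medium empty ("")
                    // gives the node's default storage, which is what the
                    // (…,default) specs in this run exercise.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
}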
Mar 30 13:03:28.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:28.980: INFO: namespace emptydir-5230 deletion completed in 6.081783933s

• [SLOW TEST:10.212 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:28.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6832/configmap-test-f6f13eb5-4366-4954-8961-43dba8d5dda9
STEP: Creating a pod to test consume configMaps
Mar 30 13:03:29.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392" in namespace "configmap-6832" to be "success or failure"
Mar 30 13:03:29.101: INFO: Pod "pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392": Phase="Pending", Reason="", readiness=false. Elapsed: 24.087396ms
Mar 30 13:03:31.138: INFO: Pod "pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061167805s
Mar 30 13:03:33.141: INFO: Pod "pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064801923s
STEP: Saw pod success
Mar 30 13:03:33.141: INFO: Pod "pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392" satisfied condition "success or failure"
Mar 30 13:03:33.144: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392 container env-test:
STEP: delete the pod
Mar 30 13:03:33.176: INFO: Waiting for pod pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392 to disappear
Mar 30 13:03:33.199: INFO: Pod pod-configmaps-b59679be-9de4-49f3-ae81-9023a318b392 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:33.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6832" for this suite.
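
The ConfigMap-as-environment spec above boils down to an EnvVar backed by a ConfigMapKeyRef. A sketch with illustrative names (the ConfigMap and key names here are assumptions):

package sketch

import corev1 "k8s.io/api/core/v1"

// envFromConfigMap wires one ConfigMap key into an environment variable;
// the test container then just echoes it and exits.
func envFromConfigMap() corev1.Container {
    return corev1.Container{
        Name:    "env-test",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
        Env: []corev1.EnvVar{{
            Name: "CONFIG_DATA_1",
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                    Key:                  "data-1",
                },
            },
        }},
    }
}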
Mar 30 13:03:39.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:39.314: INFO: namespace configmap-6832 deletion completed in 6.102980912s

• [SLOW TEST:10.334 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:39.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 30 13:03:39.399: INFO: Waiting up to 5m0s for pod "pod-00e35638-0e89-47f3-b51a-8f68f402adf8" in namespace "emptydir-8848" to be "success or failure"
Mar 30 13:03:39.402: INFO: Pod "pod-00e35638-0e89-47f3-b51a-8f68f402adf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.675185ms
Mar 30 13:03:41.406: INFO: Pod "pod-00e35638-0e89-47f3-b51a-8f68f402adf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007901641s
Mar 30 13:03:43.411: INFO: Pod "pod-00e35638-0e89-47f3-b51a-8f68f402adf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012428804s
STEP: Saw pod success
Mar 30 13:03:43.411: INFO: Pod "pod-00e35638-0e89-47f3-b51a-8f68f402adf8" satisfied condition "success or failure"
Mar 30 13:03:43.415: INFO: Trying to get logs from node iruya-worker pod pod-00e35638-0e89-47f3-b51a-8f68f402adf8 container test-container:
STEP: delete the pod
Mar 30 13:03:43.459: INFO: Waiting for pod pod-00e35638-0e89-47f3-b51a-8f68f402adf8 to disappear
Mar 30 13:03:43.462: INFO: Pod pod-00e35638-0e89-47f3-b51a-8f68f402adf8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:43.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8848" for this suite.
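
The Watchers spec a few lines below closes a watch after two notifications, mutates the object while no watch is open, then resumes from the last resourceVersion it observed and still receives the missed MODIFIED and DELETED events. The client-go pattern is roughly the following sketch; it assumes a recent client-go (the 1.15-era signatures take no context argument), and the label selector is the one visible in the dumps below.

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// resumeWatch watches labelled ConfigMaps, remembers the resourceVersion of
// the first event, then reopens the watch from that point so events delivered
// while the first watch was closed are replayed rather than lost.
func resumeWatch(ctx context.Context, clientset kubernetes.Interface, ns string) error {
    opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}
    w, err := clientset.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
    if err != nil {
        return err
    }
    var lastRV string
    for ev := range w.ResultChan() {
        lastRV = ev.Object.(*corev1.ConfigMap).ResourceVersion
        break // simulate the watch being closed after one notification
    }
    w.Stop()

    // Restart from lastRV: the API server resumes delivery from there, so
    // intervening MODIFIED/DELETED events still arrive, as the spec asserts.
    opts.ResourceVersion = lastRV
    w2, err := clientset.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
    if err != nil {
        return err
    }
    defer w2.Stop()
    for ev := range w2.ResultChan() {
        _ = ev // handle replayed events here
    }
    return nil
}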
Mar 30 13:03:49.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:49.601: INFO: namespace emptydir-8848 deletion completed in 6.135533172s

• [SLOW TEST:10.287 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:49.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 30 13:03:49.675: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6075,SelfLink:/api/v1/namespaces/watch-6075/configmaps/e2e-watch-test-watch-closed,UID:fb809bea-0435-4d39-a810-63d42d984f45,ResourceVersion:2671466,Generation:0,CreationTimestamp:2020-03-30 13:03:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 30 13:03:49.675: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6075,SelfLink:/api/v1/namespaces/watch-6075/configmaps/e2e-watch-test-watch-closed,UID:fb809bea-0435-4d39-a810-63d42d984f45,ResourceVersion:2671467,Generation:0,CreationTimestamp:2020-03-30 13:03:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 30 13:03:49.687: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6075,SelfLink:/api/v1/namespaces/watch-6075/configmaps/e2e-watch-test-watch-closed,UID:fb809bea-0435-4d39-a810-63d42d984f45,ResourceVersion:2671468,Generation:0,CreationTimestamp:2020-03-30 13:03:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 30 13:03:49.687: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6075,SelfLink:/api/v1/namespaces/watch-6075/configmaps/e2e-watch-test-watch-closed,UID:fb809bea-0435-4d39-a810-63d42d984f45,ResourceVersion:2671469,Generation:0,CreationTimestamp:2020-03-30 13:03:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:03:49.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6075" for this suite.
Mar 30 13:03:55.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:03:55.818: INFO: namespace watch-6075 deletion completed in 6.115446417s

• [SLOW TEST:6.215 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:03:55.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 30 13:03:55.885: INFO: Create a RollingUpdate DaemonSet
Mar 30 13:03:55.889: INFO: Check that daemon pods launch on every node of the cluster
Mar 30 13:03:55.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:03:55.895: INFO: Number of nodes with available pods: 0
Mar 30 13:03:55.895: INFO:
Node iruya-worker is running more than one daemon pod
Mar 30 13:03:56.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:03:56.905: INFO: Number of nodes with available pods: 0
Mar 30 13:03:56.905: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:03:57.901: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:03:57.904: INFO: Number of nodes with available pods: 0
Mar 30 13:03:57.904: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:03:58.900: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:03:58.914: INFO: Number of nodes with available pods: 0
Mar 30 13:03:58.914: INFO: Node iruya-worker is running more than one daemon pod
Mar 30 13:03:59.901: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:03:59.905: INFO: Number of nodes with available pods: 2
Mar 30 13:03:59.905: INFO: Number of running nodes: 2, number of available pods: 2
Mar 30 13:03:59.905: INFO: Update the DaemonSet to trigger a rollout
Mar 30 13:03:59.911: INFO: Updating DaemonSet daemon-set
Mar 30 13:04:11.955: INFO: Roll back the DaemonSet before rollout is complete
Mar 30 13:04:11.960: INFO: Updating DaemonSet daemon-set
Mar 30 13:04:11.960: INFO: Make sure DaemonSet rollback is complete
Mar 30 13:04:11.966: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:11.966: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:11.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:13.000: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:13.000: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:13.004: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:13.977: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:13.977: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:13.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:14.978: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:14.978: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:14.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:15.977: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:15.977: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:15.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:16.977: INFO: Wrong image for pod: daemon-set-cq25j. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Mar 30 13:04:16.977: INFO: Pod daemon-set-cq25j is not available
Mar 30 13:04:16.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 13:04:17.977: INFO: Pod daemon-set-7xsj5 is not available
Mar 30 13:04:17.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9809, will wait for the garbage collector to delete the pods
Mar 30 13:04:18.106: INFO: Deleting DaemonSet.extensions daemon-set took: 64.39979ms
Mar 30 13:04:18.406: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265652ms
Mar 30 13:04:21.210: INFO: Number of nodes with available pods: 0
Mar 30 13:04:21.210: INFO: Number of running nodes: 0, number of available pods: 0
Mar 30 13:04:21.213: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9809/daemonsets","resourceVersion":"2671610"},"items":null}
Mar 30 13:04:21.215: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9809/pods","resourceVersion":"2671610"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:04:21.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9809" for this suite.
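
At the API level, the rollback exercised above amounts to restoring the previous pod template (kubectl rollout undo does the same via ControllerRevisions). A hedged client-go sketch of the trigger-and-revert flow, assuming a recent client-go and the names seen in the log:

package sketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// rollBackDaemonSet points a DaemonSet at a bad image to start a rollout,
// then restores the old image before the rollout can finish. With the
// RollingUpdate strategy only the pod stuck on the bad image is replaced
// (daemon-set-cq25j above gives way to daemon-set-7xsj5); pods already
// running the old image are untouched, which is the "without unnecessary
// restarts" property the spec checks.
func rollBackDaemonSet(ctx context.Context, cs kubernetes.Interface, ns string) error {
    ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
    if err != nil {
        return err
    }
    oldImage := ds.Spec.Template.Spec.Containers[0].Image

    ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent" // as in the log
    if ds, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
        return err
    }

    ds.Spec.Template.Spec.Containers[0].Image = oldImage // roll back
    _, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
    return err // production code would retry on resourceVersion conflicts
}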
Mar 30 13:04:27.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:04:27.321: INFO: namespace daemonsets-9809 deletion completed in 6.09200913s

• [SLOW TEST:31.503 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:04:27.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 30 13:04:27.370: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68" in namespace "downward-api-5040" to be "success or failure"
Mar 30 13:04:27.415: INFO: Pod "downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68": Phase="Pending", Reason="", readiness=false. Elapsed: 44.253509ms
Mar 30 13:04:29.418: INFO: Pod "downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047560289s
Mar 30 13:04:31.422: INFO: Pod "downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051865089s
STEP: Saw pod success
Mar 30 13:04:31.422: INFO: Pod "downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68" satisfied condition "success or failure"
Mar 30 13:04:31.426: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68 container client-container:
STEP: delete the pod
Mar 30 13:04:31.463: INFO: Waiting for pod downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68 to disappear
Mar 30 13:04:31.486: INFO: Pod downwardapi-volume-1f4f6d1b-901e-4f76-bb57-8a5347f38d68 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:04:31.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5040" for this suite.
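
The volume at the heart of the Downward API spec above maps limits.memory into a file; when the container declares no memory limit, the kubelet writes the node's allocatable memory instead, which is what gets verified. A sketch of just the volume, with an assumed file name; the Subpath spec that follows uses the other notable mount feature here, overlaying a single ConfigMap key onto a path that already exists in the image via subPath, so a sketch of that mount is included too (its paths are likewise assumptions).

package sketch

import corev1 "k8s.io/api/core/v1"

// downwardAPIMemoryVolume exposes the named container's memory limit as a
// file; with no limit set, node-allocatable memory is written instead.
func downwardAPIMemoryVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.memory",
                    },
                }},
            },
        },
    }
}

// subPathMount overlays one key of a ConfigMap volume onto a file that
// already exists in the container image (path is an assumed example).
func subPathMount() corev1.VolumeMount {
    return corev1.VolumeMount{
        Name:      "configmap-volume",
        MountPath: "/probe-volume/existing-file",
        SubPath:   "configmap-key",
    }
}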
Mar 30 13:04:37.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:04:37.604: INFO: namespace downward-api-5040 deletion completed in 6.114144028s

• [SLOW TEST:10.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:04:37.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-sgr7
STEP: Creating a pod to test atomic-volume-subpath
Mar 30 13:04:37.689: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sgr7" in namespace "subpath-9474" to be "success or failure"
Mar 30 13:04:37.693: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321646ms
Mar 30 13:04:39.697: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008747156s
Mar 30 13:04:41.702: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 4.013296276s
Mar 30 13:04:43.707: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 6.017941526s
Mar 30 13:04:45.711: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 8.022034523s
Mar 30 13:04:47.715: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 10.026105732s
Mar 30 13:04:49.719: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 12.030394518s
Mar 30 13:04:51.723: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 14.034779421s
Mar 30 13:04:53.728: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 16.038974004s
Mar 30 13:04:55.732: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 18.043063142s
Mar 30 13:04:57.736: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true. Elapsed: 20.047147553s
Mar 30 13:04:59.740: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Running", Reason="", readiness=true.
Elapsed: 22.051654023s
Mar 30 13:05:01.745: INFO: Pod "pod-subpath-test-configmap-sgr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056254293s
STEP: Saw pod success
Mar 30 13:05:01.745: INFO: Pod "pod-subpath-test-configmap-sgr7" satisfied condition "success or failure"
Mar 30 13:05:01.748: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-sgr7 container test-container-subpath-configmap-sgr7:
STEP: delete the pod
Mar 30 13:05:01.811: INFO: Waiting for pod pod-subpath-test-configmap-sgr7 to disappear
Mar 30 13:05:01.823: INFO: Pod pod-subpath-test-configmap-sgr7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sgr7
Mar 30 13:05:01.823: INFO: Deleting pod "pod-subpath-test-configmap-sgr7" in namespace "subpath-9474"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:05:01.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9474" for this suite.
Mar 30 13:05:07.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:05:07.915: INFO: namespace subpath-9474 deletion completed in 6.086964019s

• [SLOW TEST:30.311 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:05:07.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 30 13:05:07.994: INFO: Waiting up to 5m0s for pod "pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd" in namespace "emptydir-4228" to be "success or failure"
Mar 30 13:05:08.012: INFO: Pod "pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.322009ms
Mar 30 13:05:10.015: INFO: Pod "pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02160292s
Mar 30 13:05:12.019: INFO: Pod "pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.025311878s
STEP: Saw pod success
Mar 30 13:05:12.019: INFO: Pod "pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd" satisfied condition "success or failure"
Mar 30 13:05:12.022: INFO: Trying to get logs from node iruya-worker2 pod pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd container test-container:
STEP: delete the pod
Mar 30 13:05:12.035: INFO: Waiting for pod pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd to disappear
Mar 30 13:05:12.040: INFO: Pod pod-b85dbaeb-5125-475d-99b8-dba7b6f1a5cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:05:12.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4228" for this suite.
Mar 30 13:05:18.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:05:18.146: INFO: namespace emptydir-4228 deletion completed in 6.102795839s

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:05:18.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 30 13:05:18.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 30 13:05:22.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1014" for this suite.
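
The Deployment spec that follows is the one place in this run where the numbers deserve a gloss. The deployment starts at 10 replicas with maxSurge=3 and maxUnavailable=2 (both visible in the dump below), so the broken-image update leaves the old ReplicaSet at 8 and the new one at 5. Scaling the in-flight rollout from 10 to 30 then distributes the new capacity roughly in proportion to each ReplicaSet's size (shares rounded so they sum to the slots added):

    allowed total = replicas + maxSurge = 30 + 3 = 33
    slots to add  = 33 - (8 + 5)        = 20
    old RS share  ~ 20 * 8/13 = 12  ->  8 + 12 = 20
    new RS share  ~ 20 * 5/13 = 8   ->  5 + 8  = 13

which matches the pair of .spec.replicas values (20 and 13) the spec verifies below.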
Mar 30 13:06:02.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 30 13:06:02.446: INFO: namespace pods-1014 deletion completed in 40.189841659s

• [SLOW TEST:44.301 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 30 13:06:02.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 30 13:06:02.506: INFO: Creating deployment "nginx-deployment"
Mar 30 13:06:02.514: INFO: Waiting for observed generation 1
Mar 30 13:06:04.539: INFO: Waiting for all required pods to come up
Mar 30 13:06:04.544: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 30 13:06:12.571: INFO: Waiting for deployment "nginx-deployment" to complete
Mar 30 13:06:12.579: INFO: Updating deployment "nginx-deployment" with a non-existent image
Mar 30 13:06:12.586: INFO: Updating deployment nginx-deployment
Mar 30 13:06:12.586: INFO: Waiting for observed generation 2
Mar 30 13:06:14.595: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 30 13:06:14.598: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 30 13:06:14.600: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 30 13:06:14.607: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 30 13:06:14.607: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 30 13:06:14.609: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 30 13:06:14.613: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Mar 30 13:06:14.613: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Mar 30 13:06:14.619: INFO: Updating deployment nginx-deployment
Mar 30 13:06:14.619: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Mar 30 13:06:14.665: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 30 13:06:14.694: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 30 13:06:14.891: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-118,SelfLink:/apis/apps/v1/namespaces/deployment-118/deployments/nginx-deployment,UID:887fa2e9-981b-4b44-b7ec-838f70f468b7,ResourceVersion:2672157,Generation:3,CreationTimestamp:2020-03-30 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-30 13:06:12 +0000 UTC 2020-03-30 13:06:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-30 13:06:14 +0000 UTC 2020-03-30 13:06:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Mar 30 13:06:14.958: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-118,SelfLink:/apis/apps/v1/namespaces/deployment-118/replicasets/nginx-deployment-55fb7cb77f,UID:8e1b196e-cbe0-40cf-af7f-0f396586fd96,ResourceVersion:2672189,Generation:3,CreationTimestamp:2020-03-30 13:06:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash:
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 887fa2e9-981b-4b44-b7ec-838f70f468b7 0xc002eb5ff7 0xc002eb5ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 30 13:06:14.958: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Mar 30 13:06:14.958: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-118,SelfLink:/apis/apps/v1/namespaces/deployment-118/replicasets/nginx-deployment-7b8c6f4498,UID:87c369d8-0288-428d-8177-585684d29403,ResourceVersion:2672178,Generation:3,CreationTimestamp:2020-03-30 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 887fa2e9-981b-4b44-b7ec-838f70f468b7 0xc002af8157 0xc002af8158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash:
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-245cx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-245cx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-245cx,UID:655624f4-87d1-46e1-9a65-a90d9c5a8515,ResourceVersion:2672186,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426917 0xc002426918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024269b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-6c9rq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6c9rq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-6c9rq,UID:e0d1aea4-2fff-4879-a53f-6dac90ce8830,ResourceVersion:2672159,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426a37 0xc002426a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002426ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True
0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-87lm9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-87lm9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-87lm9,UID:8f0538ed-6eeb-4569-a444-28049e6efc2d,ResourceVersion:2672171,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426b67 0xc002426b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002426c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-9r56m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9r56m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-9r56m,UID:a7f0ca77-2fa1-46d3-ba91-5e15aad7b967,ResourceVersion:2672183,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426cb7
0xc002426cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002426d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-9xblc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9xblc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-9xblc,UID:79fb8baa-aeff-4669-9a62-1a35d8368b9f,ResourceVersion:2672177,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426df7 0xc002426df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002426e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.024: INFO: Pod "nginx-deployment-55fb7cb77f-f462n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f462n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-f462n,UID:6636b1ac-82de-4cc0-9d17-0fab67cd3012,ResourceVersion:2672128,Generation:0,CreationTimestamp:2020-03-30 13:06:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002426f17 0xc002426f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002426fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002426fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-30 13:06:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-gdx5n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gdx5n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-gdx5n,UID:6bf98447-dcb2-4f81-bd12-dddab8797d33,ResourceVersion:2672121,Generation:0,CreationTimestamp:2020-03-30 13:06:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc0024270f0 0xc0024270f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002427170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002427190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-30 13:06:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-hhnqx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hhnqx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-55fb7cb77f-hhnqx,UID:ad0838cd-653d-4e3a-a81e-8500b6faa1dd,ResourceVersion:2672127,Generation:0,CreationTimestamp:2020-03-30 13:06:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e1b196e-cbe0-40cf-af7f-0f396586fd96 0xc002427260 0xc002427261}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024272e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002427300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-30 13:06:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-tgrzt" is not available: 
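Consistent with the entries above and below, a pod is logged as available only once it is Running with its Ready condition True; the 55fb7cb77f pods so far are all Pending with at most PodScheduled=True, so each is reported as not available. Below is a minimal Go sketch of that check under two stated assumptions: Pod and PodCondition are simplified stand-ins for the real k8s.io/api/core/v1 types, and the deployment's minReadySeconds (which the real availability rule also honors) is ignored.

    package main

    import "fmt"

    // PodCondition and Pod are simplified stand-ins for the corev1 types;
    // only the fields this check needs are modeled.
    type PodCondition struct {
        Type   string // "PodScheduled", "Initialized", "ContainersReady", "Ready"
        Status string // "True", "False", or "Unknown"
    }

    type Pod struct {
        Name       string
        Phase      string // "Pending", "Running", ...
        Conditions []PodCondition
    }

    // isAvailable mirrors the rule implied by the log: Running plus Ready=True.
    func isAvailable(p Pod) bool {
        if p.Phase != "Running" {
            return false
        }
        for _, c := range p.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True"
            }
        }
        return false
    }

    func main() {
        // Shape of the entries above: scheduled, but nothing else yet.
        p := Pod{
            Name:       "nginx-deployment-55fb7cb77f-87lm9",
            Phase:      "Pending",
            Conditions: []PodCondition{{Type: "PodScheduled", Status: "True"}},
        }
        fmt.Println(p.Name, "available:", isAvailable(p)) // prints: ... available: false
    }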
Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-tgrzt" is not available: Pending; image nginx:404; scheduled to iruya-worker2; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-tzp6h" is not available: Pending; image nginx:404; on iruya-worker (HostIP 172.17.0.6); created 2020-03-30 13:06:12 UTC; Ready=False (ContainersNotReady: nginx); container nginx waiting (ContainerCreating)
Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-w26lb" is not available: Pending; image nginx:404; on iruya-worker2 (HostIP 172.17.0.5); created 2020-03-30 13:06:12 UTC; Ready=False (ContainersNotReady: nginx); container nginx waiting (ContainerCreating)
Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-w2q79" is not available: Pending; image nginx:404; on iruya-worker (HostIP 172.17.0.6); created 2020-03-30 13:06:14 UTC; Ready=False (ContainersNotReady: nginx); container nginx waiting (ContainerCreating)
Mar 30 13:06:15.025: INFO: Pod "nginx-deployment-55fb7cb77f-wchtk" is not available: Pending; image nginx:404; scheduled to iruya-worker; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-2fg5w" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker2; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
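Each pod above also carries a pod-template-hash label, one value per ReplicaSet: 55fb7cb77f is the updated template pointing at nginx:404 (evidently a nonexistent tag, so those pods are not expected to become ready), while 7b8c6f4498 is the original docker.io/library/nginx:1.14-alpine template. Tallying availability per hash is one way to read rollout progress out of a dump like this; here is a self-contained Go sketch where podInfo is a hypothetical reduced record (the real data lives in ObjectMeta.Labels and PodStatus):

    package main

    import "fmt"

    // podInfo is a hypothetical reduced record of the fields used here.
    type podInfo struct {
        name      string
        hash      string // the pod-template-hash label; one value per ReplicaSet
        available bool
    }

    func main() {
        // A few entries transcribed from the log above.
        pods := []podInfo{
            {"nginx-deployment-55fb7cb77f-87lm9", "55fb7cb77f", false},
            {"nginx-deployment-55fb7cb77f-w2q79", "55fb7cb77f", false},
            {"nginx-deployment-7b8c6f4498-2fg5w", "7b8c6f4498", false},
        }
        avail, total := map[string]int{}, map[string]int{}
        for _, p := range pods {
            total[p.hash]++
            if p.available {
                avail[p.hash]++
            }
        }
        for hash, n := range total {
            fmt.Printf("ReplicaSet hash %s: %d/%d available\n", hash, avail[hash], n)
        }
    }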
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-45vn8" is available: Running; image docker.io/library/nginx:1.14-alpine; on iruya-worker2 (HostIP 172.17.0.5, PodIP 10.244.1.57); created 2020-03-30 13:06:02 UTC; Ready=True since 13:06:07; container nginx running since 13:06:06
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-6mmgj" is available: Running; image docker.io/library/nginx:1.14-alpine; on iruya-worker2 (HostIP 172.17.0.5, PodIP 10.244.1.60); created 2020-03-30 13:06:02 UTC; Ready=True since 13:06:10; container nginx running since 13:06:10
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-7cq92" is available: Running; image docker.io/library/nginx:1.14-alpine; on iruya-worker (HostIP 172.17.0.6, PodIP 10.244.2.24); created 2020-03-30 13:06:02 UTC; Ready=True since 13:06:08; container nginx running since 13:06:08
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-9p4cl" is not available: Pending; image docker.io/library/nginx:1.14-alpine; on iruya-worker (HostIP 172.17.0.6); created 2020-03-30 13:06:14 UTC; Ready=False (ContainersNotReady: nginx); container nginx waiting (ContainerCreating)
Mar 30 13:06:15.026: INFO: Pod "nginx-deployment-7b8c6f4498-crfnv" is not available: Pending; image docker.io/library/nginx:1.14-alpine; on iruya-worker2 (HostIP 172.17.0.5); created 2020-03-30 13:06:14 UTC; Ready=False (ContainersNotReady: nginx); container nginx waiting (ContainerCreating)
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-czm6d" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
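In each dumped condition, e.g. {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:07 +0000 UTC }, the first timestamp is the never-set LastProbeTime (Go's zero time) and the second is LastTransitionTime; combined with StartTime this yields time-to-ready, about 5s for "nginx-deployment-7b8c6f4498-45vn8" above. A short Go sketch of that arithmetic, assuming only that the timestamps use Go's default time.Time formatting as printed in this log:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching Go's default time.Time formatting used in the dumps.
    const stamp = "2006-01-02 15:04:05 -0700 MST"

    func main() {
        // StartTime and the Ready condition's LastTransitionTime for
        // nginx-deployment-7b8c6f4498-45vn8, copied from the log above.
        started, err := time.Parse(stamp, "2020-03-30 13:06:02 +0000 UTC")
        if err != nil {
            panic(err)
        }
        ready, err := time.Parse(stamp, "2020-03-30 13:06:07 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println("time to Ready:", ready.Sub(started)) // prints: time to Ready: 5s
    }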
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-gssj9" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-hmtjn" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-jdlfk" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker2; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-kqxjp" is not available: Pending; image docker.io/library/nginx:1.14-alpine; scheduled to iruya-worker; created 2020-03-30 13:06:14 UTC; conditions: PodScheduled=True only; no containers started yet
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-kwn6f" is available: Running; image docker.io/library/nginx:1.14-alpine; on iruya-worker2 (HostIP 172.17.0.5, PodIP 10.244.1.59); created 2020-03-30 13:06:02 UTC; Ready=True since 13:06:10; container nginx running since 13:06:09
Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-l4jt8" is available: Running; image docker.io/library/nginx:1.14-alpine; on iruya-worker (HostIP 172.17.0.6, PodIP 10.244.2.28); created 2020-03-30 13:06:02 UTC; Ready=True since 13:06:11; container nginx running since 13:06:11
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c596e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c59700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.28,StartTime:2020-03-30 13:06:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-30 13:06:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cc02c3baff32e35ca23239408c56874887340037e535becdc8bc5086e891daf5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.027: INFO: Pod "nginx-deployment-7b8c6f4498-ljtzd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ljtzd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-ljtzd,UID:596dac6b-e04a-4b79-8f2e-19c63f0ebc67,ResourceVersion:2672185,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc002c598a7 0xc002c598a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c59990} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c59a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-sllqd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sllqd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-sllqd,UID:89bd4b44-b5b7-4ce3-9ae4-9a8038bc3034,ResourceVersion:2672173,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc002c59b17 0xc002c59b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c59c80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002c59ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-vr768" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vr768,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-vr768,UID:7cf8d679-609f-4e7f-9411-04f18119a182,ResourceVersion:2672072,Generation:0,CreationTimestamp:2020-03-30 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc002c59df7 0xc002c59df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c59ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c59f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.26,StartTime:2020-03-30 13:06:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-30 13:06:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://cdce4234152c28fe6494cf260fbbc91b71c61bb32eb87108eed9c5a37c8d4f50}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-zcqsb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zcqsb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-zcqsb,UID:e55da027-5a49-4c5d-b011-6536cb384c54,ResourceVersion:2672064,Generation:0,CreationTimestamp:2020-03-30 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc00210a037 0xc00210a038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00210a0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00210a0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.27,StartTime:2020-03-30 13:06:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-30 13:06:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://43bd11ac9867dc3a1bd8a5f60a08c79397093a31d221cbc51b9389239a2cfa2e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-zfk8s" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zfk8s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-zfk8s,UID:37b3c571-c58b-4262-8000-11e690c3013d,ResourceVersion:2672047,Generation:0,CreationTimestamp:2020-03-30 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc00210a1e7 0xc00210a1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00210a2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00210a380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.58,StartTime:2020-03-30 13:06:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-30 13:06:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://68fcd831f6132ef9dd570dedd7e2068cc380ec42bad77785495f9c579ff92303}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-zlbcp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zlbcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-zlbcp,UID:5694f02c-2dba-46c4-bf03-4a66e10a0d10,ResourceVersion:2672154,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc00210a6c7 0xc00210a6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00210a860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00210a880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 13:06:15.028: INFO: Pod "nginx-deployment-7b8c6f4498-zlrvw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zlrvw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-118,SelfLink:/api/v1/namespaces/deployment-118/pods/nginx-deployment-7b8c6f4498-zlrvw,UID:adbbba36-1988-49e9-9954-2e6640e60f7e,ResourceVersion:2672187,Generation:0,CreationTimestamp:2020-03-30 13:06:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 87c369d8-0288-428d-8177-585684d29403 0xc00210a917 0xc00210a918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lb8bk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lb8bk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lb8bk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00210a990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00210a9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:06:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:06:15.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-118" for this suite. 
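The dump above flags each ReplicaSet pod as "available" or "not available": in this run the pods created at 13:06:02 are Running and Ready, while the ones created during the 13:06:14 scale-up are still Pending with only PodScheduled set. That check can be approximated from outside the suite with plain client-go — a minimal sketch, assuming the v1.15-era (pre-context) client-go signatures and reusing the namespace and labels from the dump:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig the suite loads at startup.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Select the deployment's pods by the labels shown in the dump.
        pods, err := clientset.CoreV1().Pods("deployment-118").List(metav1.ListOptions{
            LabelSelector: "name=nginx,pod-template-hash=7b8c6f4498",
        })
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            ready := false
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s phase=%s ready=%v\n", pod.Name, pod.Status.Phase, ready)
        }
    }

Note the e2e framework also folds the deployment's minReadySeconds into availability, so the Ready condition here is only an approximation of what the suite asserts.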
Mar 30 13:06:39.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:06:39.374: INFO: namespace deployment-118 deletion completed in 24.23342421s • [SLOW TEST:36.927 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:06:39.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 30 13:06:39.439: INFO: Waiting up to 5m0s for pod "client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632" in namespace "containers-164" to be "success or failure" Mar 30 13:06:39.450: INFO: Pod "client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632": Phase="Pending", Reason="", readiness=false. Elapsed: 10.434563ms Mar 30 13:06:41.453: INFO: Pod "client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014140908s Mar 30 13:06:43.458: INFO: Pod "client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018706734s STEP: Saw pod success Mar 30 13:06:43.458: INFO: Pod "client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632" satisfied condition "success or failure" Mar 30 13:06:43.461: INFO: Trying to get logs from node iruya-worker pod client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632 container test-container: STEP: delete the pod Mar 30 13:06:43.481: INFO: Waiting for pod client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632 to disappear Mar 30 13:06:43.486: INFO: Pod client-containers-3fa59793-fcfc-4ad7-b933-6b950e537632 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:06:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-164" for this suite. 
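The "override all" pod in this spec exercises the two container fields that replace what the image bakes in: command overrides the image ENTRYPOINT and args overrides its CMD; setting only one of the two leaves the image's other default in effect. A sketch of such a pod in Go, with an illustrative image and command line rather than the test's actual ones:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithOverride builds a pod whose container replaces both image
    // defaults: Command overrides ENTRYPOINT, Args overrides CMD.
    func podWithOverride() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/sh"},                 // replaces ENTRYPOINT
                    Args:    []string{"-c", "echo override all"}, // replaces CMD
                }},
            },
        }
    }

    func main() { fmt.Println(podWithOverride().Spec.Containers[0].Command) }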
Mar 30 13:06:49.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:06:49.627: INFO: namespace containers-164 deletion completed in 6.137481589s • [SLOW TEST:10.253 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:06:49.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 30 13:06:49.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-132' Mar 30 13:06:52.205: INFO: stderr: "" Mar 30 13:06:52.205: INFO: stdout: "pod/pause created\n" Mar 30 13:06:52.205: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 30 13:06:52.206: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-132" to be "running and ready" Mar 30 13:06:52.211: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.01224ms Mar 30 13:06:54.215: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00949996s Mar 30 13:06:56.219: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013825667s Mar 30 13:06:56.219: INFO: Pod "pause" satisfied condition "running and ready" Mar 30 13:06:56.219: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 30 13:06:56.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-132' Mar 30 13:06:56.321: INFO: stderr: "" Mar 30 13:06:56.321: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 30 13:06:56.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-132' Mar 30 13:06:56.421: INFO: stderr: "" Mar 30 13:06:56.421: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 30 13:06:56.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-132' Mar 30 13:06:56.519: INFO: stderr: "" Mar 30 13:06:56.519: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 30 13:06:56.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-132' Mar 30 13:06:56.613: INFO: stderr: "" Mar 30 13:06:56.613: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 30 13:06:56.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-132' Mar 30 13:06:56.711: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 13:06:56.711: INFO: stdout: "pod \"pause\" force deleted\n" Mar 30 13:06:56.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-132' Mar 30 13:06:56.798: INFO: stderr: "No resources found.\n" Mar 30 13:06:56.798: INFO: stdout: "" Mar 30 13:06:56.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-132 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 13:06:56.948: INFO: stderr: "" Mar 30 13:06:56.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:06:56.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-132" for this suite. 
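The label round-trip above runs through kubectl, but the API-level operation is a read-modify-write of the pod's metadata.labels map. A sketch with pre-context client-go, reusing the names from this run; kubectl itself computes a patch rather than issuing a full update, so treat this as an illustration, not kubectl's internals:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods := clientset.CoreV1().Pods("kubectl-132")

        // Add the label, as `kubectl label pods pause testing-label=testing-label-value` does.
        pod, err := pods.Get("pause", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["testing-label"] = "testing-label-value"
        pod, err = pods.Update(pod)
        if err != nil {
            panic(err)
        }

        // Remove it again, the equivalent of `kubectl label pods pause testing-label-`.
        delete(pod.Labels, "testing-label")
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
    }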
Mar 30 13:07:02.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:07:03.088: INFO: namespace kubectl-132 deletion completed in 6.134952179s • [SLOW TEST:13.461 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:07:03.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 30 13:07:03.665: INFO: created pod pod-service-account-defaultsa Mar 30 13:07:03.665: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 30 13:07:03.672: INFO: created pod pod-service-account-mountsa Mar 30 13:07:03.672: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 30 13:07:03.678: INFO: created pod pod-service-account-nomountsa Mar 30 13:07:03.678: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 30 13:07:03.707: INFO: created pod pod-service-account-defaultsa-mountspec Mar 30 13:07:03.707: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 30 13:07:03.720: INFO: created pod pod-service-account-mountsa-mountspec Mar 30 13:07:03.720: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 30 13:07:03.764: INFO: created pod pod-service-account-nomountsa-mountspec Mar 30 13:07:03.764: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 30 13:07:03.792: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 30 13:07:03.792: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 30 13:07:03.834: INFO: created pod pod-service-account-mountsa-nomountspec Mar 30 13:07:03.834: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 30 13:07:03.841: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 30 13:07:03.841: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:07:03.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4785" for this suite. 
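The matrix of pods above (defaultsa/mountsa/nomountsa crossed with mountspec/nomountspec/unset) pins down the precedence rule: automountServiceAccountToken can be set on the ServiceAccount and on the PodSpec, and the pod-level value wins when both are present — hence pod-service-account-mountsa-nomountspec reports "mount: false" even though its service account allows mounting, and nomountsa-mountspec reports "mount: true". A sketch of the two knobs in Go (object names illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    var (
        // Service-account-level default: do not automount a token.
        nomountSA = corev1.ServiceAccount{
            ObjectMeta:                   metav1.ObjectMeta{Name: "nomount"},
            AutomountServiceAccountToken: boolPtr(false),
        }
        // Pod-level setting overrides the service account's choice.
        podOverride = corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-example"},
            Spec: corev1.PodSpec{
                ServiceAccountName:           "nomount",
                AutomountServiceAccountToken: boolPtr(true), // wins over the SA's false
                Containers: []corev1.Container{{
                    Name:  "c",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }},
            },
        }
    )

    func main() { _, _ = nomountSA, podOverride }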
Mar 30 13:07:29.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:07:30.041: INFO: namespace svcaccounts-4785 deletion completed in 26.126835126s • [SLOW TEST:26.952 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:07:30.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:07:30.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0" in namespace "downward-api-7262" to be "success or failure" Mar 30 13:07:30.110: INFO: Pod "downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.026222ms Mar 30 13:07:32.140: INFO: Pod "downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03873249s Mar 30 13:07:34.143: INFO: Pod "downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041872467s STEP: Saw pod success Mar 30 13:07:34.143: INFO: Pod "downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0" satisfied condition "success or failure" Mar 30 13:07:34.145: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0 container client-container: STEP: delete the pod Mar 30 13:07:34.220: INFO: Waiting for pod downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0 to disappear Mar 30 13:07:34.241: INFO: Pod downwardapi-volume-8bde7ea6-f65a-4eb1-86bf-47f182abecc0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:07:34.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7262" for this suite. 
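The pod in this spec mounts a downward API volume, and DefaultMode on that volume governs the permission bits of every file it projects — the property the spec asserts on. A sketch of the volume definition in Go; the 0400 mode and the podname path are illustrative, not lifted from the test:

    package main

    import corev1 "k8s.io/api/core/v1"

    func int32Ptr(i int32) *int32 { return &i }

    // A downward API volume that projects the pod's name into one file,
    // with an explicit DefaultMode applied to the projected files.
    var downwardVolume = corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                DefaultMode: int32Ptr(0400),
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path:     "podname",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                }},
            },
        },
    }

    func main() { _ = downwardVolume }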
Mar 30 13:07:40.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:07:40.344: INFO: namespace downward-api-7262 deletion completed in 6.098393493s • [SLOW TEST:10.303 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:07:40.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 30 13:07:44.956: INFO: Successfully updated pod "annotationupdatec20dba42-58b1-4e28-abb4-fdc6c93cad7c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:07:46.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-359" for this suite. 
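This spec creates a pod whose projected downward API volume exposes metadata.annotations, then mutates an annotation and expects the kubelet to rewrite the mounted file ("Successfully updated pod" above is the update step succeeding). The update half of that flow, sketched with pre-context client-go — the annotation key/value is illustrative; the namespace and pod name are the ones from the log:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods := clientset.CoreV1().Pods("projected-359")
        pod, err := pods.Get("annotationupdatec20dba42-58b1-4e28-abb4-fdc6c93cad7c", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if pod.Annotations == nil {
            pod.Annotations = map[string]string{}
        }
        pod.Annotations["builder"] = "foo" // illustrative; the kubelet refreshes the mounted file
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
    }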
Mar 30 13:08:08.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:08:09.066: INFO: namespace projected-359 deletion completed in 22.090138036s • [SLOW TEST:28.722 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:08:09.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:08:09.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047" in namespace "projected-8205" to be "success or failure" Mar 30 13:08:09.129: INFO: Pod "downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047": Phase="Pending", Reason="", readiness=false. Elapsed: 12.713221ms Mar 30 13:08:11.133: INFO: Pod "downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017539933s Mar 30 13:08:13.138: INFO: Pod "downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021989278s STEP: Saw pod success Mar 30 13:08:13.138: INFO: Pod "downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047" satisfied condition "success or failure" Mar 30 13:08:13.141: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047 container client-container: STEP: delete the pod Mar 30 13:08:13.173: INFO: Waiting for pod downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047 to disappear Mar 30 13:08:13.182: INFO: Pod downwardapi-volume-e417b047-dbd3-4b2e-8f72-165654640047 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:08:13.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8205" for this suite. 
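Same DefaultMode assertion as the earlier downward API spec, but through a projected volume: here the mode sits on the projected volume source and applies across everything it projects. A sketch (mode and path again illustrative):

    package main

    import corev1 "k8s.io/api/core/v1"

    func int32Ptr(i int32) *int32 { return &i }

    // The projected variant of the downward API volume: sources are wrapped
    // in VolumeProjection entries and share the volume-level DefaultMode.
    var projectedVolume = corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: int32Ptr(0400),
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                }},
            },
        },
    }

    func main() { _ = projectedVolume }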
Mar 30 13:08:19.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:08:19.282: INFO: namespace projected-8205 deletion completed in 6.095717349s • [SLOW TEST:10.213 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:08:19.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7467 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 13:08:19.334: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 13:08:39.415: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostName&protocol=udp&host=10.244.2.49&port=8081&tries=1'] Namespace:pod-network-test-7467 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:08:39.415: INFO: >>> kubeConfig: /root/.kube/config I0330 13:08:39.441029 6 log.go:172] (0xc001a8f810) (0xc001d97f40) Create stream I0330 13:08:39.441072 6 log.go:172] (0xc001a8f810) (0xc001d97f40) Stream added, broadcasting: 1 I0330 13:08:39.443351 6 log.go:172] (0xc001a8f810) Reply frame received for 1 I0330 13:08:39.443399 6 log.go:172] (0xc001a8f810) (0xc00061cb40) Create stream I0330 13:08:39.443414 6 log.go:172] (0xc001a8f810) (0xc00061cb40) Stream added, broadcasting: 3 I0330 13:08:39.444432 6 log.go:172] (0xc001a8f810) Reply frame received for 3 I0330 13:08:39.444464 6 log.go:172] (0xc001a8f810) (0xc00061cbe0) Create stream I0330 13:08:39.444477 6 log.go:172] (0xc001a8f810) (0xc00061cbe0) Stream added, broadcasting: 5 I0330 13:08:39.445611 6 log.go:172] (0xc001a8f810) Reply frame received for 5 I0330 13:08:39.538897 6 log.go:172] (0xc001a8f810) Data frame received for 3 I0330 13:08:39.538943 6 log.go:172] (0xc00061cb40) (3) Data frame handling I0330 13:08:39.538964 6 log.go:172] (0xc00061cb40) (3) Data frame sent I0330 13:08:39.539411 6 log.go:172] (0xc001a8f810) Data frame received for 5 I0330 13:08:39.539442 6 log.go:172] (0xc00061cbe0) (5) Data frame handling I0330 13:08:39.539670 6 log.go:172] (0xc001a8f810) Data frame received for 3 I0330 13:08:39.539743 6 log.go:172] (0xc00061cb40) (3) Data frame handling I0330 13:08:39.542186 6 log.go:172] (0xc001a8f810) Data frame received for 1 I0330 13:08:39.542227 6 
log.go:172] (0xc001d97f40) (1) Data frame handling I0330 13:08:39.542265 6 log.go:172] (0xc001d97f40) (1) Data frame sent I0330 13:08:39.542289 6 log.go:172] (0xc001a8f810) (0xc001d97f40) Stream removed, broadcasting: 1 I0330 13:08:39.542326 6 log.go:172] (0xc001a8f810) Go away received I0330 13:08:39.542497 6 log.go:172] (0xc001a8f810) (0xc001d97f40) Stream removed, broadcasting: 1 I0330 13:08:39.542532 6 log.go:172] (0xc001a8f810) (0xc00061cb40) Stream removed, broadcasting: 3 I0330 13:08:39.542560 6 log.go:172] (0xc001a8f810) (0xc00061cbe0) Stream removed, broadcasting: 5 Mar 30 13:08:39.542: INFO: Waiting for endpoints: map[] Mar 30 13:08:39.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostName&protocol=udp&host=10.244.1.81&port=8081&tries=1'] Namespace:pod-network-test-7467 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:08:39.546: INFO: >>> kubeConfig: /root/.kube/config I0330 13:08:39.577208 6 log.go:172] (0xc000eeab00) (0xc002090aa0) Create stream I0330 13:08:39.577237 6 log.go:172] (0xc000eeab00) (0xc002090aa0) Stream added, broadcasting: 1 I0330 13:08:39.578845 6 log.go:172] (0xc000eeab00) Reply frame received for 1 I0330 13:08:39.578876 6 log.go:172] (0xc000eeab00) (0xc002090b40) Create stream I0330 13:08:39.578885 6 log.go:172] (0xc000eeab00) (0xc002090b40) Stream added, broadcasting: 3 I0330 13:08:39.579768 6 log.go:172] (0xc000eeab00) Reply frame received for 3 I0330 13:08:39.579802 6 log.go:172] (0xc000eeab00) (0xc002090be0) Create stream I0330 13:08:39.579812 6 log.go:172] (0xc000eeab00) (0xc002090be0) Stream added, broadcasting: 5 I0330 13:08:39.580790 6 log.go:172] (0xc000eeab00) Reply frame received for 5 I0330 13:08:39.655654 6 log.go:172] (0xc000eeab00) Data frame received for 3 I0330 13:08:39.655679 6 log.go:172] (0xc002090b40) (3) Data frame handling I0330 13:08:39.655691 6 log.go:172] (0xc002090b40) (3) Data frame sent I0330 13:08:39.656393 6 log.go:172] (0xc000eeab00) Data frame received for 5 I0330 13:08:39.656423 6 log.go:172] (0xc002090be0) (5) Data frame handling I0330 13:08:39.656641 6 log.go:172] (0xc000eeab00) Data frame received for 3 I0330 13:08:39.656684 6 log.go:172] (0xc002090b40) (3) Data frame handling I0330 13:08:39.658572 6 log.go:172] (0xc000eeab00) Data frame received for 1 I0330 13:08:39.658611 6 log.go:172] (0xc002090aa0) (1) Data frame handling I0330 13:08:39.658640 6 log.go:172] (0xc002090aa0) (1) Data frame sent I0330 13:08:39.658792 6 log.go:172] (0xc000eeab00) (0xc002090aa0) Stream removed, broadcasting: 1 I0330 13:08:39.658834 6 log.go:172] (0xc000eeab00) Go away received I0330 13:08:39.658912 6 log.go:172] (0xc000eeab00) (0xc002090aa0) Stream removed, broadcasting: 1 I0330 13:08:39.658941 6 log.go:172] (0xc000eeab00) (0xc002090b40) Stream removed, broadcasting: 3 I0330 13:08:39.658963 6 log.go:172] (0xc000eeab00) (0xc002090be0) Stream removed, broadcasting: 5 Mar 30 13:08:39.659: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:08:39.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7467" for this suite. 
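Each ExecWithOptions above shells into the host-test-container pod and curls its /dial endpoint, which relays a hostName probe over UDP to the target pod IP and reports which endpoints answered; the spec passes once every expected endpoint has been seen ("Waiting for endpoints: map[]" means none are still outstanding). The same probe sketched as a plain Go HTTP client, using the pod IPs from this run; the response shape in the comment is hedged from the netexec test image's usual output, not from this log:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // hostName request relayed over UDP from 10.244.2.50 to 10.244.2.49:8081.
        url := "http://10.244.2.50:8080/dial?request=hostName&protocol=udp&host=10.244.2.49&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // e.g. {"responses":["<target hostname>"]} on success
    }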
Mar 30 13:09:01.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:09:01.751: INFO: namespace pod-network-test-7467 deletion completed in 22.08876229s • [SLOW TEST:42.468 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:09:01.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 30 13:09:05.855: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c26e20d1-52a0-4a72-bd26-17dac9715987,GenerateName:,Namespace:events-315,SelfLink:/api/v1/namespaces/events-315/pods/send-events-c26e20d1-52a0-4a72-bd26-17dac9715987,UID:5fa9adde-f754-4d60-be9d-fa068e268c11,ResourceVersion:2673126,Generation:0,CreationTimestamp:2020-03-30 13:09:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 817019682,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8gfqv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8gfqv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-8gfqv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aceb60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002aceb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:09:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:09:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:09:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:09:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.51,StartTime:2020-03-30 13:09:01 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-30 13:09:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://bb114fe76a0b0433795a7cab7820e52f1159a826f78ef54594ded59a0b8d5c5f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 30 13:09:07.860: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 30 13:09:09.864: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:09:09.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-315" for this suite. Mar 30 13:09:53.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:09:53.989: INFO: namespace events-315 deletion completed in 44.115932904s • [SLOW TEST:52.238 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:09:53.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 30 13:09:54.104: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673240,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 13:09:54.104: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673241,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 30 13:09:54.104: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673242,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 30 13:10:04.138: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673264,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 13:10:04.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673265,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 30 13:10:04.138: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3642,SelfLink:/api/v1/namespaces/watch-3642/configmaps/e2e-watch-test-label-changed,UID:eae4a63a-acbf-4352-a819-9bcbe5230965,ResourceVersion:2673266,Generation:0,CreationTimestamp:2020-03-30 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:10:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3642" for this suite. Mar 30 13:10:10.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:10:10.255: INFO: namespace watch-3642 deletion completed in 6.112375761s • [SLOW TEST:16.266 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:10:10.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 30 13:10:10.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 30 13:10:10.388: INFO: stderr: "" Mar 30 13:10:10.388: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:10:10.388: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9114" for this suite. Mar 30 13:10:16.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:10:16.483: INFO: namespace kubectl-9114 deletion completed in 6.091973295s • [SLOW TEST:6.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:10:16.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
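For reference, the simple DaemonSet created above can be approximated by hand as follows (a minimal sketch only — the suite's actual pod template, labels, and image are not printed in this log; the image below is borrowed from elsewhere in this run):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

A DaemonSet schedules one pod onto every node whose taints its tolerations permit, which is exactly what the next check verifies.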
Mar 30 13:10:16.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:16.597: INFO: Number of nodes with available pods: 0 Mar 30 13:10:16.597: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:17.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:17.609: INFO: Number of nodes with available pods: 0 Mar 30 13:10:17.609: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:18.602: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:18.606: INFO: Number of nodes with available pods: 0 Mar 30 13:10:18.606: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:19.602: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:19.606: INFO: Number of nodes with available pods: 0 Mar 30 13:10:19.606: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:20.601: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:20.605: INFO: Number of nodes with available pods: 1 Mar 30 13:10:20.605: INFO: Node iruya-worker2 is running more than one daemon pod Mar 30 13:10:21.602: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:21.606: INFO: Number of nodes with available pods: 2 Mar 30 13:10:21.606: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
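The stop-and-revive check that follows can be reproduced manually: delete one daemon pod and watch the controller recreate it (illustrative commands, assuming the app=daemon-set label from the sketch above; substitute the pod name kubectl reports):

kubectl get pods -l app=daemon-set -o wide
kubectl delete pod <one-of-the-daemon-set-pods>
kubectl get pods -l app=daemon-set -w

The wait loop below is the framework polling until the replacement pod reports available on its node again.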
Mar 30 13:10:21.629: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:21.631: INFO: Number of nodes with available pods: 1 Mar 30 13:10:21.631: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:22.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:22.639: INFO: Number of nodes with available pods: 1 Mar 30 13:10:22.639: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:23.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:23.643: INFO: Number of nodes with available pods: 1 Mar 30 13:10:23.643: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:24.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:24.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:24.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:25.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:25.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:25.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:26.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:26.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:26.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:27.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:27.641: INFO: Number of nodes with available pods: 1 Mar 30 13:10:27.641: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:28.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:28.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:28.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:29.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:29.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:29.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:30.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:30.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:30.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:31.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:31.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:31.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:32.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:32.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:32.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:33.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:33.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:33.640: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:34.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:34.640: INFO: Number of nodes with available pods: 1 Mar 30 13:10:34.641: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:10:35.637: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:10:35.640: INFO: Number of nodes with available pods: 2 Mar 30 13:10:35.640: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-865, will wait for the garbage collector to delete the pods Mar 30 13:10:35.703: INFO: Deleting DaemonSet.extensions daemon-set took: 6.761209ms Mar 30 13:10:36.003: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260722ms Mar 30 13:10:42.206: INFO: Number of nodes with available pods: 0 Mar 30 13:10:42.207: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 13:10:42.209: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-865/daemonsets","resourceVersion":"2673410"},"items":null} Mar 30 13:10:42.211: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-865/pods","resourceVersion":"2673410"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:10:42.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-865" for this suite. 
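The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above are expected rather than errors: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint shown in those messages, so the framework skips it when counting nodes. The taint can be inspected, and — if daemon pods were wanted there — tolerated, like this (the toleration fragment is illustrative, not part of the test's spec):

kubectl describe node iruya-control-plane | grep -A2 Taints

# pod-template fragment that would opt in to the tainted node:
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule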
Mar 30 13:10:48.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:10:48.359: INFO: namespace daemonsets-865 deletion completed in 6.133899424s • [SLOW TEST:31.876 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:10:48.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 30 13:10:52.939: INFO: Successfully updated pod "pod-update-ddf789a5-03a3-437e-aead-ff80763eec0c" STEP: verifying the updated pod is in kubernetes Mar 30 13:10:52.979: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:10:52.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2169" for this suite. Mar 30 13:11:14.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:11:15.074: INFO: namespace pods-2169 deletion completed in 22.092049999s • [SLOW TEST:26.714 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:11:15.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7145 STEP: Waiting for pods to come up. 
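The PreStop test starting here exercises a container lifecycle preStop hook: as the tester pod is killed, its hook notifies the server pod. A minimal sketch of such a hook (names and endpoint are assumptions for illustration — the log does not print the suite's actual pod specs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop"]
EOF

On deletion, the kubelet runs the preStop command before sending SIGTERM to the container, which is why the server's "Received" counter below records "prestop": 1 once the tester pod is deleted.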
STEP: Creating tester pod tester in namespace prestop-7145 STEP: Deleting pre-stop pod Mar 30 13:11:28.203: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:11:28.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7145" for this suite. Mar 30 13:12:06.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:12:06.354: INFO: namespace prestop-7145 deletion completed in 38.138290204s • [SLOW TEST:51.280 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:12:06.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 30 13:12:11.446: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:12:12.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1596" for this suite. 
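Adoption and release in the ReplicaSet test above are driven purely by labels and ownerReferences: the ReplicaSet adopts the pre-existing pod-adoption-release pod because its selector matches the pod's 'name' label, and releases it the moment the label stops matching, creating a fresh replica to restore the replica count. This can be observed by hand (illustrative; the replacement label value is assumed):

kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}{"\n"}'
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}{"\n"}'   # empty once released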
Mar 30 13:12:34.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:12:34.711: INFO: namespace replicaset-1596 deletion completed in 22.205243182s • [SLOW TEST:28.357 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:12:34.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 30 13:12:40.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-283040d9-235d-499a-a1e5-d7865945dded -c busybox-main-container --namespace=emptydir-3204 -- cat /usr/share/volumeshare/shareddata.txt' Mar 30 13:12:41.015: INFO: stderr: "I0330 13:12:40.922539 494 log.go:172] (0xc0008c2420) (0xc000290a00) Create stream\nI0330 13:12:40.922600 494 log.go:172] (0xc0008c2420) (0xc000290a00) Stream added, broadcasting: 1\nI0330 13:12:40.924865 494 log.go:172] (0xc0008c2420) Reply frame received for 1\nI0330 13:12:40.924961 494 log.go:172] (0xc0008c2420) (0xc0008fe000) Create stream\nI0330 13:12:40.925010 494 log.go:172] (0xc0008c2420) (0xc0008fe000) Stream added, broadcasting: 3\nI0330 13:12:40.926269 494 log.go:172] (0xc0008c2420) Reply frame received for 3\nI0330 13:12:40.926341 494 log.go:172] (0xc0008c2420) (0xc000290aa0) Create stream\nI0330 13:12:40.926356 494 log.go:172] (0xc0008c2420) (0xc000290aa0) Stream added, broadcasting: 5\nI0330 13:12:40.927501 494 log.go:172] (0xc0008c2420) Reply frame received for 5\nI0330 13:12:41.010525 494 log.go:172] (0xc0008c2420) Data frame received for 3\nI0330 13:12:41.010556 494 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0330 13:12:41.010568 494 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0330 13:12:41.010575 494 log.go:172] (0xc0008c2420) Data frame received for 3\nI0330 13:12:41.010582 494 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0330 13:12:41.010611 494 log.go:172] (0xc0008c2420) Data frame received for 5\nI0330 13:12:41.010635 494 log.go:172] (0xc000290aa0) (5) Data frame handling\nI0330 13:12:41.012147 494 log.go:172] (0xc0008c2420) Data frame received for 1\nI0330 13:12:41.012181 494 log.go:172] (0xc000290a00) (1) Data frame handling\nI0330 13:12:41.012201 494 log.go:172] (0xc000290a00) (1) Data frame sent\nI0330 13:12:41.012230 494 log.go:172] (0xc0008c2420) (0xc000290a00) Stream removed, broadcasting: 1\nI0330 13:12:41.012315 494 log.go:172] (0xc0008c2420) Go away received\nI0330 13:12:41.012542 494 
log.go:172] (0xc0008c2420) (0xc000290a00) Stream removed, broadcasting: 1\nI0330 13:12:41.012557 494 log.go:172] (0xc0008c2420) (0xc0008fe000) Stream removed, broadcasting: 3\nI0330 13:12:41.012563 494 log.go:172] (0xc0008c2420) (0xc000290aa0) Stream removed, broadcasting: 5\n" Mar 30 13:12:41.015: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:12:41.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3204" for this suite. Mar 30 13:12:47.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:12:47.119: INFO: namespace emptydir-3204 deletion completed in 6.099333546s • [SLOW TEST:12.407 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:12:47.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9547 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 30 13:12:47.212: INFO: Found 0 stateful pods, waiting for 3 Mar 30 13:12:57.217: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:12:57.217: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:12:57.217: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 30 13:12:57.242: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 30 13:13:07.278: INFO: Updating stateful set ss2 Mar 30 13:13:07.302: INFO: Waiting for Pod statefulset-9547/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 30 13:13:17.464: INFO: Found 2 stateful pods, waiting for 3 Mar 30 13:13:27.468: INFO: Waiting 
for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:13:27.469: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:13:27.469: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 30 13:13:27.490: INFO: Updating stateful set ss2 Mar 30 13:13:27.516: INFO: Waiting for Pod statefulset-9547/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 30 13:13:37.543: INFO: Updating stateful set ss2 Mar 30 13:13:37.602: INFO: Waiting for StatefulSet statefulset-9547/ss2 to complete update Mar 30 13:13:37.602: INFO: Waiting for Pod statefulset-9547/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 30 13:13:47.611: INFO: Deleting all statefulset in ns statefulset-9547 Mar 30 13:13:47.614: INFO: Scaling statefulset ss2 to 0 Mar 30 13:14:07.631: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 13:14:07.635: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:14:07.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9547" for this suite. Mar 30 13:14:13.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:14:13.748: INFO: namespace statefulset-9547 deletion completed in 6.091333294s • [SLOW TEST:86.628 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:14:13.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:14:13.828: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 30 13:14:18.833: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 13:14:18.833: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 30 13:14:20.838: INFO: Creating deployment "test-rollover-deployment" Mar 30 13:14:20.857: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 
30 13:14:22.865: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 30 13:14:22.872: INFO: Ensure that both replica sets have 1 created replica Mar 30 13:14:22.878: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 30 13:14:22.886: INFO: Updating deployment test-rollover-deployment Mar 30 13:14:22.886: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 30 13:14:24.904: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 30 13:14:24.910: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 30 13:14:24.916: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:24.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170863, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:26.925: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:26.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170866, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:28.925: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:28.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170866, 
loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:30.925: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:30.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170866, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:32.924: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:32.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170866, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:34.923: INFO: all replica sets need to contain the pod-template-hash label Mar 30 13:14:34.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170866, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721170860, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:14:36.924: INFO: Mar 30 13:14:36.924: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 30 13:14:36.931: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8357,SelfLink:/apis/apps/v1/namespaces/deployment-8357/deployments/test-rollover-deployment,UID:ae4746b6-3653-4b7e-a65c-8b307ccdbe6c,ResourceVersion:2674365,Generation:2,CreationTimestamp:2020-03-30 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-30 13:14:20 +0000 UTC 2020-03-30 13:14:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-30 13:14:36 +0000 UTC 2020-03-30 13:14:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 30 13:14:36.934: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8357,SelfLink:/apis/apps/v1/namespaces/deployment-8357/replicasets/test-rollover-deployment-854595fc44,UID:0dc266fc-d1a1-4e58-8df8-3a0aec5567d5,ResourceVersion:2674353,Generation:2,CreationTimestamp:2020-03-30 13:14:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ae4746b6-3653-4b7e-a65c-8b307ccdbe6c 0xc002ccf5a7 0xc002ccf5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 30 13:14:36.934: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 30 13:14:36.934: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8357,SelfLink:/apis/apps/v1/namespaces/deployment-8357/replicasets/test-rollover-controller,UID:94238c28-b452-49bd-8d07-9faf79038511,ResourceVersion:2674364,Generation:2,CreationTimestamp:2020-03-30 13:14:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
ae4746b6-3653-4b7e-a65c-8b307ccdbe6c 0xc002ccf42f 0xc002ccf440}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 13:14:36.934: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8357,SelfLink:/apis/apps/v1/namespaces/deployment-8357/replicasets/test-rollover-deployment-9b8b997cf,UID:454fdd20-d379-4f18-89aa-796f1d269b32,ResourceVersion:2674318,Generation:2,CreationTimestamp:2020-03-30 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ae4746b6-3653-4b7e-a65c-8b307ccdbe6c 0xc002ccf6e0 0xc002ccf6e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 13:14:36.938: INFO: Pod "test-rollover-deployment-854595fc44-szxnh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-szxnh,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8357,SelfLink:/api/v1/namespaces/deployment-8357/pods/test-rollover-deployment-854595fc44-szxnh,UID:9fe24996-4223-4e12-8e60-af64782a8e79,ResourceVersion:2674331,Generation:0,CreationTimestamp:2020-03-30 13:14:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0dc266fc-d1a1-4e58-8df8-3a0aec5567d5 0xc002f36797 0xc002f36798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-74bfh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-74bfh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-74bfh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f36810} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002f36830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:14:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:14:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:14:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:14:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.92,StartTime:2020-03-30 13:14:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-30 13:14:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b950a822315f6cde07f68a20ad843514d7bc57d3f50188b002c7512c11cb200d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:14:36.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8357" for this suite. Mar 30 13:14:42.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:14:43.043: INFO: namespace deployment-8357 deletion completed in 6.101450308s • [SLOW TEST:29.294 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:14:43.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:15:43.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4732" for this suite. 
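The container-probe test above (namespace container-probe-4732) relies on a readiness probe that can never succeed. A minimal reproduction (illustrative image and probe command; the suite's actual spec is not printed in this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      periodSeconds: 5
EOF

kubectl get pod never-ready   # stays Running, READY 0/1, RESTARTS 0

A failing readiness probe only keeps the pod out of Service endpoints; unlike a liveness probe it never restarts the container — which is exactly what the test asserts over its minute-long observation window (13:14:43 to 13:15:43 above).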
Mar 30 13:16:05.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:16:05.219: INFO: namespace container-probe-4732 deletion completed in 22.090379009s • [SLOW TEST:82.176 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:16:05.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 30 13:16:05.280: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:16:05.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8886" for this suite. 
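The -p 0 in the invocation above asks kubectl proxy to bind a kernel-assigned ephemeral port and report it on stdout; a stand-alone sketch of the same check follows (the sleep/sed plumbing is illustrative and assumes the proxy's usual "Starting to serve on 127.0.0.1:<port>" banner):

kubectl proxy --port=0 --disable-filter > /tmp/proxy.out 2>&1 &
sleep 2
# pull the kernel-assigned port out of the banner line
PORT=$(sed -n 's/^Starting to serve on 127\.0\.0\.1:\([0-9][0-9]*\).*/\1/p' /tmp/proxy.out)
curl "http://127.0.0.1:${PORT}/api/"   # should return the API versions document, as the test's /api/ curl does
kill %1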
Mar 30 13:16:11.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:16:11.461: INFO: namespace kubectl-8886 deletion completed in 6.096979066s • [SLOW TEST:6.242 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:16:11.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 13:16:17.543: INFO: DNS probes using dns-test-b54e0bed-214d-4f6b-9e17-08abc1f10464 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 13:16:23.642: INFO: File wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:23.645: INFO: File jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 30 13:16:23.645: INFO: Lookups using dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec failed for: [wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local] Mar 30 13:16:28.650: INFO: File wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:28.655: INFO: File jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:28.655: INFO: Lookups using dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec failed for: [wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local] Mar 30 13:16:33.650: INFO: File wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:33.653: INFO: File jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:33.653: INFO: Lookups using dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec failed for: [wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local] Mar 30 13:16:38.650: INFO: File wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:38.654: INFO: File jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:38.654: INFO: Lookups using dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec failed for: [wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local] Mar 30 13:16:43.650: INFO: File wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 30 13:16:43.653: INFO: File jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local from pod dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec contains 'foo.example.com. ' instead of 'bar.example.com.' 
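The repeated failures above are the expected propagation window: the Service's externalName was just changed from foo.example.com to bar.example.com, and the probe pods keep seeing the old CNAME until the cluster DNS picks up the updated Service, so the test simply polls until the new target appears. The same flow can be driven by hand with a sketch along these lines (service name, namespace, and image are illustrative):

kubectl create service externalname dns-demo --external-name foo.example.com
kubectl run -it --rm digger --image=tutum/dnsutils --restart=Never -- \
  dig +short dns-demo.default.svc.cluster.local CNAME   # initially answers foo.example.com.
kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
# re-run the dig until the answer flips to bar.example.com.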
Mar 30 13:16:43.653: INFO: Lookups using dns-3752/dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec failed for: [wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local] Mar 30 13:16:48.654: INFO: DNS probes using dns-test-c1279d8e-e52b-4cc1-a332-e5186e80acec succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3752.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3752.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 13:16:55.154: INFO: DNS probes using dns-test-9110386f-87b5-4a8d-ae9b-cb28398119d8 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:16:55.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3752" for this suite. Mar 30 13:17:01.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:01.335: INFO: namespace dns-3752 deletion completed in 6.10904519s • [SLOW TEST:49.874 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:01.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:17:01.396: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 30 13:17:01.416: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 30 13:17:06.421: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 13:17:06.421: INFO: Creating deployment "test-rolling-update-deployment" Mar 30 13:17:06.425: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 30 13:17:06.440: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 30 13:17:08.448: INFO: 
Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 30 13:17:08.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171026, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171026, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171026, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171026, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:17:10.456: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 30 13:17:10.467: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4933,SelfLink:/apis/apps/v1/namespaces/deployment-4933/deployments/test-rolling-update-deployment,UID:a7d56b23-56e8-45e5-b71a-5d82ae856e13,ResourceVersion:2674905,Generation:1,CreationTimestamp:2020-03-30 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-30 13:17:06 +0000 UTC 2020-03-30 13:17:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-30 13:17:09 +0000 UTC 2020-03-30 13:17:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 30 13:17:10.470: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4933,SelfLink:/apis/apps/v1/namespaces/deployment-4933/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:83c37d44-dd2c-4c5e-b283-2177735dafee,ResourceVersion:2674894,Generation:1,CreationTimestamp:2020-03-30 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a7d56b23-56e8-45e5-b71a-5d82ae856e13 0xc0026ad457 0xc0026ad458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 30 13:17:10.470: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 30 13:17:10.470: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4933,SelfLink:/apis/apps/v1/namespaces/deployment-4933/replicasets/test-rolling-update-controller,UID:0ec63901-978d-4340-88a6-c3d00cb01341,ResourceVersion:2674903,Generation:2,CreationTimestamp:2020-03-30 13:17:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a7d56b23-56e8-45e5-b71a-5d82ae856e13 0xc0026ad387 0xc0026ad388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 
13:17:10.473: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-zln9j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-zln9j,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4933,SelfLink:/api/v1/namespaces/deployment-4933/pods/test-rolling-update-deployment-79f6b9d75c-zln9j,UID:d1f72707-bb9d-4a87-8b00-40b996ee2e46,ResourceVersion:2674893,Generation:0,CreationTimestamp:2020-03-30 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 83c37d44-dd2c-4c5e-b283-2177735dafee 0xc0026add37 0xc0026add38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t8fnj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t8fnj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t8fnj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026addb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026addd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:17:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:17:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:17:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:17:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.65,StartTime:2020-03-30 13:17:06 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-30 13:17:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e3a9bb861d74ed0d356f8807d11fe7931a3622ca41119993cdacd2ab0c7ee2ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:17:10.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-4933" for this suite. Mar 30 13:17:16.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:16.620: INFO: namespace deployment-4933 deletion completed in 6.144190277s • [SLOW TEST:15.285 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:16.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:17:16.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9047" for this suite. Mar 30 13:17:22.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:22.782: INFO: namespace services-9047 deletion completed in 6.102295599s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.161 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:22.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:17:22.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46" in namespace 
"downward-api-4741" to be "success or failure" Mar 30 13:17:22.848: INFO: Pod "downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.566693ms Mar 30 13:17:24.852: INFO: Pod "downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014886443s Mar 30 13:17:26.857: INFO: Pod "downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019656832s STEP: Saw pod success Mar 30 13:17:26.857: INFO: Pod "downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46" satisfied condition "success or failure" Mar 30 13:17:26.860: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46 container client-container: STEP: delete the pod Mar 30 13:17:26.879: INFO: Waiting for pod downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46 to disappear Mar 30 13:17:26.887: INFO: Pod downwardapi-volume-785866cd-c951-4380-8738-d5f8b2a58f46 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:17:26.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4741" for this suite. Mar 30 13:17:32.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:32.980: INFO: namespace downward-api-4741 deletion completed in 6.090409038s • [SLOW TEST:10.198 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:32.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:17:37.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7050" for this suite. 
Mar 30 13:17:43.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:43.279: INFO: namespace emptydir-wrapper-7050 deletion completed in 6.095129284s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:43.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1211.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 13:17:49.405: INFO: DNS probes using dns-1211/dns-test-d99056cb-8dcb-48ef-a06a-9f4713fb1651 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:17:49.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1211" for this suite. 
Mar 30 13:17:55.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:17:55.655: INFO: namespace dns-1211 deletion completed in 6.192731269s • [SLOW TEST:12.375 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:17:55.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 13:17:55.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4702' Mar 30 13:17:58.164: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 13:17:58.164: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 30 13:17:58.174: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 30 13:17:58.243: INFO: scanned /root for discovery docs: Mar 30 13:17:58.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4702' Mar 30 13:18:14.077: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 30 13:18:14.077: INFO: stdout: "Created e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4\nScaling up e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 30 13:18:14.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4702' Mar 30 13:18:14.170: INFO: stderr: "" Mar 30 13:18:14.170: INFO: stdout: "e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4-kjtnf " Mar 30 13:18:14.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4-kjtnf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4702' Mar 30 13:18:14.262: INFO: stderr: "" Mar 30 13:18:14.262: INFO: stdout: "true" Mar 30 13:18:14.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4-kjtnf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4702' Mar 30 13:18:14.354: INFO: stderr: "" Mar 30 13:18:14.354: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 30 13:18:14.355: INFO: e2e-test-nginx-rc-afba8c7dd6a12fde706a841918798fe4-kjtnf is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 30 13:18:14.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4702' Mar 30 13:18:14.446: INFO: stderr: "" Mar 30 13:18:14.446: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:18:14.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4702" for this suite.
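As the stderr above warns, kubectl rolling-update (like the run generators) is deprecated; the modern equivalent of "rolling-update to the same image" is a Deployment rollout. A sketch on a recent enough kubectl (1.15+ for rollout restart; all names illustrative):

kubectl create deployment nginx-demo --image=docker.io/library/nginx:1.14-alpine
kubectl rollout restart deployment nginx-demo    # re-rolls the pods with the same image
kubectl rollout status deployment nginx-demo     # waits for the new ReplicaSet to go Ready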
Mar 30 13:18:36.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:18:36.540: INFO: namespace kubectl-4702 deletion completed in 22.090051094s • [SLOW TEST:40.885 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:18:36.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:18:36.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd" in namespace "downward-api-5688" to be "success or failure" Mar 30 13:18:36.649: INFO: Pod "downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.238786ms Mar 30 13:18:38.654: INFO: Pod "downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028018489s Mar 30 13:18:40.658: INFO: Pod "downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032528717s STEP: Saw pod success Mar 30 13:18:40.658: INFO: Pod "downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd" satisfied condition "success or failure" Mar 30 13:18:40.662: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd container client-container: STEP: delete the pod Mar 30 13:18:40.684: INFO: Waiting for pod downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd to disappear Mar 30 13:18:40.698: INFO: Pod downwardapi-volume-c486fa9d-05ef-4638-965e-59ffb12be1cd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:18:40.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5688" for this suite. 
Mar 30 13:18:46.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:18:46.796: INFO: namespace downward-api-5688 deletion completed in 6.094154763s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:18:46.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 30 13:18:56.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:56.895: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:56.934477 6 log.go:172] (0xc001b34790) (0xc000fd95e0) Create stream I0330 13:18:56.934503 6 log.go:172] (0xc001b34790) (0xc000fd95e0) Stream added, broadcasting: 1 I0330 13:18:56.936206 6 log.go:172] (0xc001b34790) Reply frame received for 1 I0330 13:18:56.936234 6 log.go:172] (0xc001b34790) (0xc0020919a0) Create stream I0330 13:18:56.936246 6 log.go:172] (0xc001b34790) (0xc0020919a0) Stream added, broadcasting: 3 I0330 13:18:56.937296 6 log.go:172] (0xc001b34790) Reply frame received for 3 I0330 13:18:56.937342 6 log.go:172] (0xc001b34790) (0xc000fd97c0) Create stream I0330 13:18:56.937352 6 log.go:172] (0xc001b34790) (0xc000fd97c0) Stream added, broadcasting: 5 I0330 13:18:56.938104 6 log.go:172] (0xc001b34790) Reply frame received for 5 I0330 13:18:57.012208 6 log.go:172] (0xc001b34790) Data frame received for 5 I0330 13:18:57.012239 6 log.go:172] (0xc000fd97c0) (5) Data frame handling I0330 13:18:57.012272 6 log.go:172] (0xc001b34790) Data frame received for 3 I0330 13:18:57.012285 6 log.go:172] (0xc0020919a0) (3) Data frame handling I0330 13:18:57.012300 6 log.go:172] (0xc0020919a0) (3) Data frame sent I0330 13:18:57.012313 6 log.go:172] (0xc001b34790) Data frame received for 3 I0330 13:18:57.012325 6 log.go:172] (0xc0020919a0) (3) Data frame handling I0330 13:18:57.013951 6 log.go:172] (0xc001b34790) Data frame received for 1 I0330 13:18:57.013985 6 log.go:172] (0xc000fd95e0) (1) Data frame handling I0330 13:18:57.014003 6 log.go:172] (0xc000fd95e0) (1) Data frame sent I0330 13:18:57.014018 6 log.go:172] (0xc001b34790) (0xc000fd95e0) Stream removed, broadcasting: 1 I0330 13:18:57.014043 6 
log.go:172] (0xc001b34790) Go away received I0330 13:18:57.014172 6 log.go:172] (0xc001b34790) (0xc000fd95e0) Stream removed, broadcasting: 1 I0330 13:18:57.014195 6 log.go:172] (0xc001b34790) (0xc0020919a0) Stream removed, broadcasting: 3 I0330 13:18:57.014209 6 log.go:172] (0xc001b34790) (0xc000fd97c0) Stream removed, broadcasting: 5 Mar 30 13:18:57.014: INFO: Exec stderr: "" Mar 30 13:18:57.014: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.014: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.043660 6 log.go:172] (0xc001c2aa50) (0xc000e3a820) Create stream I0330 13:18:57.043694 6 log.go:172] (0xc001c2aa50) (0xc000e3a820) Stream added, broadcasting: 1 I0330 13:18:57.046588 6 log.go:172] (0xc001c2aa50) Reply frame received for 1 I0330 13:18:57.046638 6 log.go:172] (0xc001c2aa50) (0xc002091a40) Create stream I0330 13:18:57.046661 6 log.go:172] (0xc001c2aa50) (0xc002091a40) Stream added, broadcasting: 3 I0330 13:18:57.047742 6 log.go:172] (0xc001c2aa50) Reply frame received for 3 I0330 13:18:57.047793 6 log.go:172] (0xc001c2aa50) (0xc000fd9860) Create stream I0330 13:18:57.047820 6 log.go:172] (0xc001c2aa50) (0xc000fd9860) Stream added, broadcasting: 5 I0330 13:18:57.048906 6 log.go:172] (0xc001c2aa50) Reply frame received for 5 I0330 13:18:57.129422 6 log.go:172] (0xc001c2aa50) Data frame received for 3 I0330 13:18:57.129448 6 log.go:172] (0xc002091a40) (3) Data frame handling I0330 13:18:57.129465 6 log.go:172] (0xc002091a40) (3) Data frame sent I0330 13:18:57.129473 6 log.go:172] (0xc001c2aa50) Data frame received for 3 I0330 13:18:57.129479 6 log.go:172] (0xc002091a40) (3) Data frame handling I0330 13:18:57.129603 6 log.go:172] (0xc001c2aa50) Data frame received for 5 I0330 13:18:57.129638 6 log.go:172] (0xc000fd9860) (5) Data frame handling I0330 13:18:57.130908 6 log.go:172] (0xc001c2aa50) Data frame received for 1 I0330 13:18:57.130942 6 log.go:172] (0xc000e3a820) (1) Data frame handling I0330 13:18:57.130963 6 log.go:172] (0xc000e3a820) (1) Data frame sent I0330 13:18:57.130972 6 log.go:172] (0xc001c2aa50) (0xc000e3a820) Stream removed, broadcasting: 1 I0330 13:18:57.131056 6 log.go:172] (0xc001c2aa50) (0xc000e3a820) Stream removed, broadcasting: 1 I0330 13:18:57.131068 6 log.go:172] (0xc001c2aa50) (0xc002091a40) Stream removed, broadcasting: 3 I0330 13:18:57.131169 6 log.go:172] (0xc001c2aa50) (0xc000fd9860) Stream removed, broadcasting: 5 Mar 30 13:18:57.131: INFO: Exec stderr: "" Mar 30 13:18:57.131: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.131: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.131270 6 log.go:172] (0xc001c2aa50) Go away received I0330 13:18:57.159782 6 log.go:172] (0xc001c2b760) (0xc000e3ad20) Create stream I0330 13:18:57.159814 6 log.go:172] (0xc001c2b760) (0xc000e3ad20) Stream added, broadcasting: 1 I0330 13:18:57.166898 6 log.go:172] (0xc001c2b760) Reply frame received for 1 I0330 13:18:57.167005 6 log.go:172] (0xc001c2b760) (0xc000fd9ae0) Create stream I0330 13:18:57.167052 6 log.go:172] (0xc001c2b760) (0xc000fd9ae0) Stream added, broadcasting: 3 I0330 13:18:57.168647 6 log.go:172] (0xc001c2b760) Reply frame received for 3 I0330 13:18:57.168717 6 log.go:172] (0xc001c2b760) (0xc001f9ff40) Create stream 
I0330 13:18:57.168734 6 log.go:172] (0xc001c2b760) (0xc001f9ff40) Stream added, broadcasting: 5 I0330 13:18:57.170316 6 log.go:172] (0xc001c2b760) Reply frame received for 5 I0330 13:18:57.220880 6 log.go:172] (0xc001c2b760) Data frame received for 3 I0330 13:18:57.220918 6 log.go:172] (0xc000fd9ae0) (3) Data frame handling I0330 13:18:57.220933 6 log.go:172] (0xc000fd9ae0) (3) Data frame sent I0330 13:18:57.220943 6 log.go:172] (0xc001c2b760) Data frame received for 3 I0330 13:18:57.220952 6 log.go:172] (0xc000fd9ae0) (3) Data frame handling I0330 13:18:57.220974 6 log.go:172] (0xc001c2b760) Data frame received for 5 I0330 13:18:57.220994 6 log.go:172] (0xc001f9ff40) (5) Data frame handling I0330 13:18:57.222555 6 log.go:172] (0xc001c2b760) Data frame received for 1 I0330 13:18:57.222570 6 log.go:172] (0xc000e3ad20) (1) Data frame handling I0330 13:18:57.222580 6 log.go:172] (0xc000e3ad20) (1) Data frame sent I0330 13:18:57.222589 6 log.go:172] (0xc001c2b760) (0xc000e3ad20) Stream removed, broadcasting: 1 I0330 13:18:57.222598 6 log.go:172] (0xc001c2b760) Go away received I0330 13:18:57.222773 6 log.go:172] (0xc001c2b760) (0xc000e3ad20) Stream removed, broadcasting: 1 I0330 13:18:57.222803 6 log.go:172] (0xc001c2b760) (0xc000fd9ae0) Stream removed, broadcasting: 3 I0330 13:18:57.222815 6 log.go:172] (0xc001c2b760) (0xc001f9ff40) Stream removed, broadcasting: 5 Mar 30 13:18:57.222: INFO: Exec stderr: "" Mar 30 13:18:57.222: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.222: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.256805 6 log.go:172] (0xc002c7c4d0) (0xc000e3b220) Create stream I0330 13:18:57.256831 6 log.go:172] (0xc002c7c4d0) (0xc000e3b220) Stream added, broadcasting: 1 I0330 13:18:57.259148 6 log.go:172] (0xc002c7c4d0) Reply frame received for 1 I0330 13:18:57.259214 6 log.go:172] (0xc002c7c4d0) (0xc000fd9cc0) Create stream I0330 13:18:57.259231 6 log.go:172] (0xc002c7c4d0) (0xc000fd9cc0) Stream added, broadcasting: 3 I0330 13:18:57.259932 6 log.go:172] (0xc002c7c4d0) Reply frame received for 3 I0330 13:18:57.259961 6 log.go:172] (0xc002c7c4d0) (0xc001ff2000) Create stream I0330 13:18:57.259970 6 log.go:172] (0xc002c7c4d0) (0xc001ff2000) Stream added, broadcasting: 5 I0330 13:18:57.260635 6 log.go:172] (0xc002c7c4d0) Reply frame received for 5 I0330 13:18:57.309636 6 log.go:172] (0xc002c7c4d0) Data frame received for 5 I0330 13:18:57.309663 6 log.go:172] (0xc001ff2000) (5) Data frame handling I0330 13:18:57.309699 6 log.go:172] (0xc002c7c4d0) Data frame received for 3 I0330 13:18:57.309730 6 log.go:172] (0xc000fd9cc0) (3) Data frame handling I0330 13:18:57.309758 6 log.go:172] (0xc000fd9cc0) (3) Data frame sent I0330 13:18:57.309772 6 log.go:172] (0xc002c7c4d0) Data frame received for 3 I0330 13:18:57.309783 6 log.go:172] (0xc000fd9cc0) (3) Data frame handling I0330 13:18:57.311221 6 log.go:172] (0xc002c7c4d0) Data frame received for 1 I0330 13:18:57.311246 6 log.go:172] (0xc000e3b220) (1) Data frame handling I0330 13:18:57.311258 6 log.go:172] (0xc000e3b220) (1) Data frame sent I0330 13:18:57.311273 6 log.go:172] (0xc002c7c4d0) (0xc000e3b220) Stream removed, broadcasting: 1 I0330 13:18:57.311289 6 log.go:172] (0xc002c7c4d0) Go away received I0330 13:18:57.311460 6 log.go:172] (0xc002c7c4d0) (0xc000e3b220) Stream removed, broadcasting: 1 I0330 13:18:57.311495 6 log.go:172] (0xc002c7c4d0) 
(0xc000fd9cc0) Stream removed, broadcasting: 3 I0330 13:18:57.311508 6 log.go:172] (0xc002c7c4d0) (0xc001ff2000) Stream removed, broadcasting: 5 Mar 30 13:18:57.311: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 30 13:18:57.311: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.311: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.346462 6 log.go:172] (0xc002191340) (0xc002091e00) Create stream I0330 13:18:57.346492 6 log.go:172] (0xc002191340) (0xc002091e00) Stream added, broadcasting: 1 I0330 13:18:57.348477 6 log.go:172] (0xc002191340) Reply frame received for 1 I0330 13:18:57.348526 6 log.go:172] (0xc002191340) (0xc000e3b2c0) Create stream I0330 13:18:57.348539 6 log.go:172] (0xc002191340) (0xc000e3b2c0) Stream added, broadcasting: 3 I0330 13:18:57.351367 6 log.go:172] (0xc002191340) Reply frame received for 3 I0330 13:18:57.351556 6 log.go:172] (0xc002191340) (0xc0011e4be0) Create stream I0330 13:18:57.351578 6 log.go:172] (0xc002191340) (0xc0011e4be0) Stream added, broadcasting: 5 I0330 13:18:57.352885 6 log.go:172] (0xc002191340) Reply frame received for 5 I0330 13:18:57.425248 6 log.go:172] (0xc002191340) Data frame received for 3 I0330 13:18:57.425295 6 log.go:172] (0xc000e3b2c0) (3) Data frame handling I0330 13:18:57.425315 6 log.go:172] (0xc000e3b2c0) (3) Data frame sent I0330 13:18:57.425330 6 log.go:172] (0xc002191340) Data frame received for 3 I0330 13:18:57.425340 6 log.go:172] (0xc000e3b2c0) (3) Data frame handling I0330 13:18:57.425354 6 log.go:172] (0xc002191340) Data frame received for 5 I0330 13:18:57.425384 6 log.go:172] (0xc0011e4be0) (5) Data frame handling I0330 13:18:57.426586 6 log.go:172] (0xc002191340) Data frame received for 1 I0330 13:18:57.426611 6 log.go:172] (0xc002091e00) (1) Data frame handling I0330 13:18:57.426629 6 log.go:172] (0xc002091e00) (1) Data frame sent I0330 13:18:57.426697 6 log.go:172] (0xc002191340) (0xc002091e00) Stream removed, broadcasting: 1 I0330 13:18:57.427038 6 log.go:172] (0xc002191340) (0xc002091e00) Stream removed, broadcasting: 1 I0330 13:18:57.427135 6 log.go:172] (0xc002191340) Go away received I0330 13:18:57.427227 6 log.go:172] (0xc002191340) (0xc000e3b2c0) Stream removed, broadcasting: 3 I0330 13:18:57.427290 6 log.go:172] (0xc002191340) (0xc0011e4be0) Stream removed, broadcasting: 5 Mar 30 13:18:57.427: INFO: Exec stderr: "" Mar 30 13:18:57.427: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.427: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.467107 6 log.go:172] (0xc001b79130) (0xc001ff21e0) Create stream I0330 13:18:57.467137 6 log.go:172] (0xc001b79130) (0xc001ff21e0) Stream added, broadcasting: 1 I0330 13:18:57.470488 6 log.go:172] (0xc001b79130) Reply frame received for 1 I0330 13:18:57.470550 6 log.go:172] (0xc001b79130) (0xc00061df40) Create stream I0330 13:18:57.470572 6 log.go:172] (0xc001b79130) (0xc00061df40) Stream added, broadcasting: 3 I0330 13:18:57.471626 6 log.go:172] (0xc001b79130) Reply frame received for 3 I0330 13:18:57.471675 6 log.go:172] (0xc001b79130) (0xc002091ea0) Create stream I0330 13:18:57.471713 6 log.go:172] (0xc001b79130) (0xc002091ea0) Stream added, 
broadcasting: 5 I0330 13:18:57.472721 6 log.go:172] (0xc001b79130) Reply frame received for 5 I0330 13:18:57.533381 6 log.go:172] (0xc001b79130) Data frame received for 3 I0330 13:18:57.533411 6 log.go:172] (0xc00061df40) (3) Data frame handling I0330 13:18:57.533483 6 log.go:172] (0xc00061df40) (3) Data frame sent I0330 13:18:57.533501 6 log.go:172] (0xc001b79130) Data frame received for 3 I0330 13:18:57.533551 6 log.go:172] (0xc00061df40) (3) Data frame handling I0330 13:18:57.533712 6 log.go:172] (0xc001b79130) Data frame received for 5 I0330 13:18:57.533725 6 log.go:172] (0xc002091ea0) (5) Data frame handling I0330 13:18:57.535240 6 log.go:172] (0xc001b79130) Data frame received for 1 I0330 13:18:57.535277 6 log.go:172] (0xc001ff21e0) (1) Data frame handling I0330 13:18:57.535292 6 log.go:172] (0xc001ff21e0) (1) Data frame sent I0330 13:18:57.535306 6 log.go:172] (0xc001b79130) (0xc001ff21e0) Stream removed, broadcasting: 1 I0330 13:18:57.535344 6 log.go:172] (0xc001b79130) Go away received I0330 13:18:57.535444 6 log.go:172] (0xc001b79130) (0xc001ff21e0) Stream removed, broadcasting: 1 I0330 13:18:57.535455 6 log.go:172] (0xc001b79130) (0xc00061df40) Stream removed, broadcasting: 3 I0330 13:18:57.535460 6 log.go:172] (0xc001b79130) (0xc002091ea0) Stream removed, broadcasting: 5 Mar 30 13:18:57.535: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 30 13:18:57.535: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.535: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.573676 6 log.go:172] (0xc001d04630) (0xc0020f01e0) Create stream I0330 13:18:57.573703 6 log.go:172] (0xc001d04630) (0xc0020f01e0) Stream added, broadcasting: 1 I0330 13:18:57.576693 6 log.go:172] (0xc001d04630) Reply frame received for 1 I0330 13:18:57.576763 6 log.go:172] (0xc001d04630) (0xc0011e4d20) Create stream I0330 13:18:57.576786 6 log.go:172] (0xc001d04630) (0xc0011e4d20) Stream added, broadcasting: 3 I0330 13:18:57.578108 6 log.go:172] (0xc001d04630) Reply frame received for 3 I0330 13:18:57.578193 6 log.go:172] (0xc001d04630) (0xc0023b2000) Create stream I0330 13:18:57.578212 6 log.go:172] (0xc001d04630) (0xc0023b2000) Stream added, broadcasting: 5 I0330 13:18:57.579523 6 log.go:172] (0xc001d04630) Reply frame received for 5 I0330 13:18:57.639250 6 log.go:172] (0xc001d04630) Data frame received for 3 I0330 13:18:57.639275 6 log.go:172] (0xc0011e4d20) (3) Data frame handling I0330 13:18:57.639282 6 log.go:172] (0xc0011e4d20) (3) Data frame sent I0330 13:18:57.639286 6 log.go:172] (0xc001d04630) Data frame received for 3 I0330 13:18:57.639290 6 log.go:172] (0xc0011e4d20) (3) Data frame handling I0330 13:18:57.639321 6 log.go:172] (0xc001d04630) Data frame received for 5 I0330 13:18:57.639362 6 log.go:172] (0xc0023b2000) (5) Data frame handling I0330 13:18:57.641011 6 log.go:172] (0xc001d04630) Data frame received for 1 I0330 13:18:57.641043 6 log.go:172] (0xc0020f01e0) (1) Data frame handling I0330 13:18:57.641065 6 log.go:172] (0xc0020f01e0) (1) Data frame sent I0330 13:18:57.641087 6 log.go:172] (0xc001d04630) (0xc0020f01e0) Stream removed, broadcasting: 1 I0330 13:18:57.641265 6 log.go:172] (0xc001d04630) Go away received I0330 13:18:57.641532 6 log.go:172] (0xc001d04630) (0xc0020f01e0) Stream removed, broadcasting: 1 I0330 13:18:57.641561 6 log.go:172] 
(0xc001d04630) (0xc0011e4d20) Stream removed, broadcasting: 3 I0330 13:18:57.641584 6 log.go:172] (0xc001d04630) (0xc0023b2000) Stream removed, broadcasting: 5 Mar 30 13:18:57.641: INFO: Exec stderr: "" Mar 30 13:18:57.641: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.641: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.671569 6 log.go:172] (0xc001d04f20) (0xc0020f0500) Create stream I0330 13:18:57.671608 6 log.go:172] (0xc001d04f20) (0xc0020f0500) Stream added, broadcasting: 1 I0330 13:18:57.676174 6 log.go:172] (0xc001d04f20) Reply frame received for 1 I0330 13:18:57.676328 6 log.go:172] (0xc001d04f20) (0xc0020f05a0) Create stream I0330 13:18:57.676405 6 log.go:172] (0xc001d04f20) (0xc0020f05a0) Stream added, broadcasting: 3 I0330 13:18:57.678731 6 log.go:172] (0xc001d04f20) Reply frame received for 3 I0330 13:18:57.678783 6 log.go:172] (0xc001d04f20) (0xc001ff2280) Create stream I0330 13:18:57.678838 6 log.go:172] (0xc001d04f20) (0xc001ff2280) Stream added, broadcasting: 5 I0330 13:18:57.680971 6 log.go:172] (0xc001d04f20) Reply frame received for 5 I0330 13:18:57.741424 6 log.go:172] (0xc001d04f20) Data frame received for 3 I0330 13:18:57.741482 6 log.go:172] (0xc0020f05a0) (3) Data frame handling I0330 13:18:57.741492 6 log.go:172] (0xc0020f05a0) (3) Data frame sent I0330 13:18:57.741498 6 log.go:172] (0xc001d04f20) Data frame received for 3 I0330 13:18:57.741504 6 log.go:172] (0xc0020f05a0) (3) Data frame handling I0330 13:18:57.741534 6 log.go:172] (0xc001d04f20) Data frame received for 5 I0330 13:18:57.741545 6 log.go:172] (0xc001ff2280) (5) Data frame handling I0330 13:18:57.742971 6 log.go:172] (0xc001d04f20) Data frame received for 1 I0330 13:18:57.742984 6 log.go:172] (0xc0020f0500) (1) Data frame handling I0330 13:18:57.742990 6 log.go:172] (0xc0020f0500) (1) Data frame sent I0330 13:18:57.743001 6 log.go:172] (0xc001d04f20) (0xc0020f0500) Stream removed, broadcasting: 1 I0330 13:18:57.743088 6 log.go:172] (0xc001d04f20) Go away received I0330 13:18:57.743139 6 log.go:172] (0xc001d04f20) (0xc0020f0500) Stream removed, broadcasting: 1 I0330 13:18:57.743150 6 log.go:172] (0xc001d04f20) (0xc0020f05a0) Stream removed, broadcasting: 3 I0330 13:18:57.743159 6 log.go:172] (0xc001d04f20) (0xc001ff2280) Stream removed, broadcasting: 5 Mar 30 13:18:57.743: INFO: Exec stderr: "" Mar 30 13:18:57.743: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.743: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.776088 6 log.go:172] (0xc001d05a20) (0xc0020f0780) Create stream I0330 13:18:57.776149 6 log.go:172] (0xc001d05a20) (0xc0020f0780) Stream added, broadcasting: 1 I0330 13:18:57.778789 6 log.go:172] (0xc001d05a20) Reply frame received for 1 I0330 13:18:57.778817 6 log.go:172] (0xc001d05a20) (0xc0011e4f00) Create stream I0330 13:18:57.778827 6 log.go:172] (0xc001d05a20) (0xc0011e4f00) Stream added, broadcasting: 3 I0330 13:18:57.779646 6 log.go:172] (0xc001d05a20) Reply frame received for 3 I0330 13:18:57.779706 6 log.go:172] (0xc001d05a20) (0xc0011e4fa0) Create stream I0330 13:18:57.779721 6 log.go:172] (0xc001d05a20) (0xc0011e4fa0) Stream added, broadcasting: 5 I0330 13:18:57.780701 6 log.go:172] (0xc001d05a20) Reply 
frame received for 5 I0330 13:18:57.828424 6 log.go:172] (0xc001d05a20) Data frame received for 5 I0330 13:18:57.828465 6 log.go:172] (0xc0011e4fa0) (5) Data frame handling I0330 13:18:57.828499 6 log.go:172] (0xc001d05a20) Data frame received for 3 I0330 13:18:57.828533 6 log.go:172] (0xc0011e4f00) (3) Data frame handling I0330 13:18:57.828559 6 log.go:172] (0xc0011e4f00) (3) Data frame sent I0330 13:18:57.828638 6 log.go:172] (0xc001d05a20) Data frame received for 3 I0330 13:18:57.828661 6 log.go:172] (0xc0011e4f00) (3) Data frame handling I0330 13:18:57.829854 6 log.go:172] (0xc001d05a20) Data frame received for 1 I0330 13:18:57.829877 6 log.go:172] (0xc0020f0780) (1) Data frame handling I0330 13:18:57.829887 6 log.go:172] (0xc0020f0780) (1) Data frame sent I0330 13:18:57.829899 6 log.go:172] (0xc001d05a20) (0xc0020f0780) Stream removed, broadcasting: 1 I0330 13:18:57.829990 6 log.go:172] (0xc001d05a20) Go away received I0330 13:18:57.830029 6 log.go:172] (0xc001d05a20) (0xc0020f0780) Stream removed, broadcasting: 1 I0330 13:18:57.830098 6 log.go:172] (0xc001d05a20) (0xc0011e4f00) Stream removed, broadcasting: 3 I0330 13:18:57.830123 6 log.go:172] (0xc001d05a20) (0xc0011e4fa0) Stream removed, broadcasting: 5 Mar 30 13:18:57.830: INFO: Exec stderr: "" Mar 30 13:18:57.830: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4202 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:18:57.830: INFO: >>> kubeConfig: /root/.kube/config I0330 13:18:57.866043 6 log.go:172] (0xc0023fc4d0) (0xc0020f0960) Create stream I0330 13:18:57.866068 6 log.go:172] (0xc0023fc4d0) (0xc0020f0960) Stream added, broadcasting: 1 I0330 13:18:57.868429 6 log.go:172] (0xc0023fc4d0) Reply frame received for 1 I0330 13:18:57.868571 6 log.go:172] (0xc0023fc4d0) (0xc000e3b4a0) Create stream I0330 13:18:57.868596 6 log.go:172] (0xc0023fc4d0) (0xc000e3b4a0) Stream added, broadcasting: 3 I0330 13:18:57.869689 6 log.go:172] (0xc0023fc4d0) Reply frame received for 3 I0330 13:18:57.869745 6 log.go:172] (0xc0023fc4d0) (0xc001ff25a0) Create stream I0330 13:18:57.869765 6 log.go:172] (0xc0023fc4d0) (0xc001ff25a0) Stream added, broadcasting: 5 I0330 13:18:57.870724 6 log.go:172] (0xc0023fc4d0) Reply frame received for 5 I0330 13:18:57.922774 6 log.go:172] (0xc0023fc4d0) Data frame received for 5 I0330 13:18:57.922853 6 log.go:172] (0xc001ff25a0) (5) Data frame handling I0330 13:18:57.922901 6 log.go:172] (0xc0023fc4d0) Data frame received for 3 I0330 13:18:57.922923 6 log.go:172] (0xc000e3b4a0) (3) Data frame handling I0330 13:18:57.922944 6 log.go:172] (0xc000e3b4a0) (3) Data frame sent I0330 13:18:57.922965 6 log.go:172] (0xc0023fc4d0) Data frame received for 3 I0330 13:18:57.922979 6 log.go:172] (0xc000e3b4a0) (3) Data frame handling I0330 13:18:57.924217 6 log.go:172] (0xc0023fc4d0) Data frame received for 1 I0330 13:18:57.924248 6 log.go:172] (0xc0020f0960) (1) Data frame handling I0330 13:18:57.924267 6 log.go:172] (0xc0020f0960) (1) Data frame sent I0330 13:18:57.924284 6 log.go:172] (0xc0023fc4d0) (0xc0020f0960) Stream removed, broadcasting: 1 I0330 13:18:57.924307 6 log.go:172] (0xc0023fc4d0) Go away received I0330 13:18:57.924491 6 log.go:172] (0xc0023fc4d0) (0xc0020f0960) Stream removed, broadcasting: 1 I0330 13:18:57.924515 6 log.go:172] (0xc0023fc4d0) (0xc000e3b4a0) Stream removed, broadcasting: 3 I0330 13:18:57.924524 6 log.go:172] (0xc0023fc4d0) (0xc001ff25a0) Stream removed, broadcasting: 5 Mar 30 
13:18:57.924: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:18:57.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4202" for this suite. Mar 30 13:19:43.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:19:44.082: INFO: namespace e2e-kubelet-etc-hosts-4202 deletion completed in 46.151879269s • [SLOW TEST:57.285 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:19:44.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:19:44.213: INFO: Creating deployment "test-recreate-deployment" Mar 30 13:19:44.217: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 30 13:19:44.228: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 30 13:19:46.235: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 30 13:19:46.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171184, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171184, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171184, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721171184, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:19:48.241: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 30 13:19:48.247: INFO: Updating deployment test-recreate-deployment Mar 30 13:19:48.247: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 30 13:19:48.531: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8299,SelfLink:/apis/apps/v1/namespaces/deployment-8299/deployments/test-recreate-deployment,UID:a17afc2d-d6a5-4168-b7e2-61e952bbce99,ResourceVersion:2675547,Generation:2,CreationTimestamp:2020-03-30 13:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-30 13:19:48 +0000 UTC 2020-03-30 13:19:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-30 13:19:48 +0000 UTC 2020-03-30 13:19:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 30 13:19:48.638: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8299,SelfLink:/apis/apps/v1/namespaces/deployment-8299/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d5449a13-dfd8-44d5-8826-0aa331d4b4fa,ResourceVersion:2675546,Generation:1,CreationTimestamp:2020-03-30 13:19:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a17afc2d-d6a5-4168-b7e2-61e952bbce99 0xc002f039e7 0xc002f039e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 13:19:48.638: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 30 13:19:48.638: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8299,SelfLink:/apis/apps/v1/namespaces/deployment-8299/replicasets/test-recreate-deployment-6df85df6b9,UID:4041e13d-a7ed-4f5c-bd1e-a1b6650f18b5,ResourceVersion:2675535,Generation:2,CreationTimestamp:2020-03-30 13:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 
1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a17afc2d-d6a5-4168-b7e2-61e952bbce99 0xc002f03ab7 0xc002f03ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 13:19:48.642: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8sgd6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8sgd6,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8299,SelfLink:/api/v1/namespaces/deployment-8299/pods/test-recreate-deployment-5c8c9cc69d-8sgd6,UID:2f94aa8e-e910-475c-8678-44301be87e07,ResourceVersion:2675549,Generation:0,CreationTimestamp:2020-03-30 13:19:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d5449a13-dfd8-44d5-8826-0aa331d4b4fa 0xc002476497 0xc002476498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nvnhh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nvnhh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nvnhh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002476510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002476530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:19:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 13:19:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-30 13:19:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:19:48.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8299" for this suite. 
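The Recreate test above drives exactly the behavior its name promises: the old ReplicaSet (test-recreate-deployment-6df85df6b9, running redis) is scaled to 0 before the new ReplicaSet (test-recreate-deployment-5c8c9cc69d, running nginx) brings up any pod, which is why the new pod is still Pending in the dump. A minimal sketch of such a Deployment in Go, using the same k8s.io/api types that appear in the object dumps above (object and label names are copied from the log; the program only prints the object and does not talk to a cluster):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"name": "sample-pod-3"}
        d := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // Recreate tears down all old pods before any new pod starts,
                // which is exactly what the test above verifies.
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(b))
    }

The trade-off against the default RollingUpdate strategy is downtime: between the old pods' deletion and the new pods' readiness there are zero available replicas, as the "MinimumReplicasUnavailable" condition in the dump shows.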
Mar 30 13:19:54.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:19:54.776: INFO: namespace deployment-8299 deletion completed in 6.130865336s • [SLOW TEST:10.694 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:19:54.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 30 13:19:54.872: INFO: Waiting up to 5m0s for pod "pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca" in namespace "emptydir-1273" to be "success or failure" Mar 30 13:19:54.889: INFO: Pod "pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.815132ms Mar 30 13:19:56.912: INFO: Pod "pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040250343s Mar 30 13:19:58.917: INFO: Pod "pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044700586s STEP: Saw pod success Mar 30 13:19:58.917: INFO: Pod "pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca" satisfied condition "success or failure" Mar 30 13:19:58.920: INFO: Trying to get logs from node iruya-worker pod pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca container test-container: STEP: delete the pod Mar 30 13:19:58.949: INFO: Waiting for pod pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca to disappear Mar 30 13:19:58.959: INFO: Pod pod-5565086d-4f7b-4a12-aec9-0e33f5d3d7ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:19:58.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1273" for this suite. 
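The emptydir tests in this run all follow one pattern: a short-lived pod mounts an emptyDir volume, a test container creates a file with the requested mode as the requested user, prints what it observes, and exits 0 so the pod reaches Succeeded. For the tmpfs variants the volume medium is Memory. A hedged sketch of an equivalent pod (the pod name, busybox command, and UID are illustrative stand-ins for the test's own image and arguments):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative name
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever, // run once, then report Succeeded or Failed
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Create a file with the mode under test and show what the kernel reports.
                    Command:         []string{"sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"},
                    SecurityContext: &v1.SecurityContext{RunAsUser: int64Ptr(1000)}, // non-root
                    VolumeMounts:    []v1.VolumeMount{{Name: "scratch", MountPath: "/mnt/volume"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "scratch",
                    VolumeSource: v1.VolumeSource{
                        // Medium "Memory" backs the volume with tmpfs instead of node disk.
                        EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }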
Mar 30 13:20:04.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:20:05.058: INFO: namespace emptydir-1273 deletion completed in 6.094450307s • [SLOW TEST:10.281 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:20:05.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 30 13:20:05.112: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:20:12.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1149" for this suite. 
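An init-container test like the one above only needs a pod whose spec.initContainers entries all exit 0: the kubelet runs them one at a time, in order, and starts the app containers only after every init container has succeeded. A minimal sketch (image names and commands are illustrative; the actual test uses its own busybox and pause images):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "init-demo"}, // illustrative
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyAlways,
                // Init containers run sequentially; each must exit 0 before the
                // next starts, and before any app container starts.
                InitContainers: []v1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []v1.Container{{
                    Name:    "run1",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sleep", "3600"}, // stand-in for the test's long-running container
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }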
Mar 30 13:20:34.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:20:34.919: INFO: namespace init-container-1149 deletion completed in 22.087095429s • [SLOW TEST:29.862 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:20:34.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 30 13:20:34.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-94 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 30 13:20:38.395: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0330 13:20:38.318434 667 log.go:172] (0xc00011c790) (0xc0008b6140) Create stream\nI0330 13:20:38.318495 667 log.go:172] (0xc00011c790) (0xc0008b6140) Stream added, broadcasting: 1\nI0330 13:20:38.322747 667 log.go:172] (0xc00011c790) Reply frame received for 1\nI0330 13:20:38.322813 667 log.go:172] (0xc00011c790) (0xc0005c4140) Create stream\nI0330 13:20:38.322837 667 log.go:172] (0xc00011c790) (0xc0005c4140) Stream added, broadcasting: 3\nI0330 13:20:38.324333 667 log.go:172] (0xc00011c790) Reply frame received for 3\nI0330 13:20:38.324366 667 log.go:172] (0xc00011c790) (0xc0008b6000) Create stream\nI0330 13:20:38.324387 667 log.go:172] (0xc00011c790) (0xc0008b6000) Stream added, broadcasting: 5\nI0330 13:20:38.325375 667 log.go:172] (0xc00011c790) Reply frame received for 5\nI0330 13:20:38.325434 667 log.go:172] (0xc00011c790) (0xc00018a000) Create stream\nI0330 13:20:38.325449 667 log.go:172] (0xc00011c790) (0xc00018a000) Stream added, broadcasting: 7\nI0330 13:20:38.326238 667 log.go:172] (0xc00011c790) Reply frame received for 7\nI0330 13:20:38.326375 667 log.go:172] (0xc0005c4140) (3) Writing data frame\nI0330 13:20:38.326479 667 log.go:172] (0xc0005c4140) (3) Writing data frame\nI0330 13:20:38.327254 667 log.go:172] (0xc00011c790) Data frame received for 5\nI0330 13:20:38.327270 667 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0330 13:20:38.327282 667 log.go:172] (0xc0008b6000) (5) Data frame sent\nI0330 13:20:38.327771 667 log.go:172] (0xc00011c790) Data frame received for 5\nI0330 13:20:38.327788 667 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0330 13:20:38.327809 667 log.go:172] (0xc0008b6000) (5) Data frame sent\nI0330 13:20:38.373093 667 log.go:172] (0xc00011c790) Data frame received for 7\nI0330 13:20:38.373275 667 log.go:172] (0xc00018a000) (7) Data frame handling\nI0330 13:20:38.373298 667 log.go:172] (0xc00011c790) Data frame received for 5\nI0330 13:20:38.373307 667 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0330 13:20:38.373719 667 log.go:172] (0xc00011c790) Data frame received for 1\nI0330 13:20:38.373742 667 log.go:172] (0xc0008b6140) (1) Data frame handling\nI0330 13:20:38.373768 667 log.go:172] (0xc0008b6140) (1) Data frame sent\nI0330 13:20:38.373782 667 log.go:172] (0xc00011c790) (0xc0008b6140) Stream removed, broadcasting: 1\nI0330 13:20:38.373844 667 log.go:172] (0xc00011c790) (0xc0008b6140) Stream removed, broadcasting: 1\nI0330 13:20:38.373862 667 log.go:172] (0xc00011c790) (0xc0005c4140) Stream removed, broadcasting: 3\nI0330 13:20:38.373874 667 log.go:172] (0xc00011c790) (0xc0008b6000) Stream removed, broadcasting: 5\nI0330 13:20:38.374040 667 log.go:172] (0xc00011c790) (0xc00018a000) Stream removed, broadcasting: 7\nI0330 13:20:38.374877 667 log.go:172] (0xc00011c790) Go away received\n" Mar 30 13:20:38.395: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:20:40.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-94" for this suite. 
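The deprecated --generator=job/v1 path exercised above expands the kubectl run command line into a batch/v1 Job; --rm and --attach are client-side behaviors (kubectl attaches to the pod, forwards stdin, and deletes the job afterwards, which is why stdout shows both "abcd1234stdin closed" and the deletion message). Roughly the Job object the generator produces, as a sketch rather than the generator's exact output:

    package main

    import (
        "encoding/json"
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        job := batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
            Spec: batchv1.JobSpec{
                Template: v1.PodTemplateSpec{
                    Spec: v1.PodSpec{
                        RestartPolicy: v1.RestartPolicyOnFailure, // from --restart=OnFailure
                        Containers: []v1.Container{{
                            Name:    "e2e-test-rm-busybox-job",
                            Image:   "docker.io/library/busybox:1.29",
                            Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                            Stdin:   true, // from --stdin; lets the attached kubectl feed "abcd1234"
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(job, "", "  ")
        fmt.Println(string(b))
    }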
Mar 30 13:20:46.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:20:46.509: INFO: namespace kubectl-94 deletion completed in 6.102765063s • [SLOW TEST:11.589 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:20:46.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9991 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 13:20:46.558: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 13:21:12.696: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9991 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:21:12.696: INFO: >>> kubeConfig: /root/.kube/config I0330 13:21:12.727562 6 log.go:172] (0xc002b69080) (0xc001c254a0) Create stream I0330 13:21:12.727595 6 log.go:172] (0xc002b69080) (0xc001c254a0) Stream added, broadcasting: 1 I0330 13:21:12.730104 6 log.go:172] (0xc002b69080) Reply frame received for 1 I0330 13:21:12.730142 6 log.go:172] (0xc002b69080) (0xc001c25540) Create stream I0330 13:21:12.730155 6 log.go:172] (0xc002b69080) (0xc001c25540) Stream added, broadcasting: 3 I0330 13:21:12.731033 6 log.go:172] (0xc002b69080) Reply frame received for 3 I0330 13:21:12.731074 6 log.go:172] (0xc002b69080) (0xc001c255e0) Create stream I0330 13:21:12.731087 6 log.go:172] (0xc002b69080) (0xc001c255e0) Stream added, broadcasting: 5 I0330 13:21:12.731878 6 log.go:172] (0xc002b69080) Reply frame received for 5 I0330 13:21:13.817466 6 log.go:172] (0xc002b69080) Data frame received for 5 I0330 13:21:13.817531 6 log.go:172] (0xc001c255e0) (5) Data frame handling I0330 13:21:13.817587 6 log.go:172] (0xc002b69080) Data frame received for 3 I0330 13:21:13.817600 6 log.go:172] (0xc001c25540) (3) Data frame handling I0330 13:21:13.817628 6 log.go:172] (0xc001c25540) (3) Data frame sent I0330 13:21:13.817767 6 log.go:172] (0xc002b69080) Data frame received for 3 I0330 13:21:13.817785 6 log.go:172] (0xc001c25540) (3) Data frame handling I0330 13:21:13.819750 6 log.go:172] (0xc002b69080) Data frame received for 1 I0330 13:21:13.819787 6 log.go:172] 
(0xc001c254a0) (1) Data frame handling I0330 13:21:13.819811 6 log.go:172] (0xc001c254a0) (1) Data frame sent I0330 13:21:13.819835 6 log.go:172] (0xc002b69080) (0xc001c254a0) Stream removed, broadcasting: 1 I0330 13:21:13.819853 6 log.go:172] (0xc002b69080) Go away received I0330 13:21:13.820209 6 log.go:172] (0xc002b69080) (0xc001c254a0) Stream removed, broadcasting: 1 I0330 13:21:13.820231 6 log.go:172] (0xc002b69080) (0xc001c25540) Stream removed, broadcasting: 3 I0330 13:21:13.820242 6 log.go:172] (0xc002b69080) (0xc001c255e0) Stream removed, broadcasting: 5 Mar 30 13:21:13.820: INFO: Found all expected endpoints: [netserver-0] Mar 30 13:21:13.824: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.99 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9991 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 13:21:13.824: INFO: >>> kubeConfig: /root/.kube/config I0330 13:21:13.865777 6 log.go:172] (0xc0022f1600) (0xc0025e1860) Create stream I0330 13:21:13.865813 6 log.go:172] (0xc0022f1600) (0xc0025e1860) Stream added, broadcasting: 1 I0330 13:21:13.869431 6 log.go:172] (0xc0022f1600) Reply frame received for 1 I0330 13:21:13.869516 6 log.go:172] (0xc0022f1600) (0xc001c25680) Create stream I0330 13:21:13.869539 6 log.go:172] (0xc0022f1600) (0xc001c25680) Stream added, broadcasting: 3 I0330 13:21:13.871287 6 log.go:172] (0xc0022f1600) Reply frame received for 3 I0330 13:21:13.871329 6 log.go:172] (0xc0022f1600) (0xc001c257c0) Create stream I0330 13:21:13.871343 6 log.go:172] (0xc0022f1600) (0xc001c257c0) Stream added, broadcasting: 5 I0330 13:21:13.872257 6 log.go:172] (0xc0022f1600) Reply frame received for 5 I0330 13:21:14.961298 6 log.go:172] (0xc0022f1600) Data frame received for 3 I0330 13:21:14.961346 6 log.go:172] (0xc001c25680) (3) Data frame handling I0330 13:21:14.961381 6 log.go:172] (0xc001c25680) (3) Data frame sent I0330 13:21:14.961408 6 log.go:172] (0xc0022f1600) Data frame received for 3 I0330 13:21:14.961426 6 log.go:172] (0xc001c25680) (3) Data frame handling I0330 13:21:14.961448 6 log.go:172] (0xc0022f1600) Data frame received for 5 I0330 13:21:14.961498 6 log.go:172] (0xc001c257c0) (5) Data frame handling I0330 13:21:14.963294 6 log.go:172] (0xc0022f1600) Data frame received for 1 I0330 13:21:14.963318 6 log.go:172] (0xc0025e1860) (1) Data frame handling I0330 13:21:14.963332 6 log.go:172] (0xc0025e1860) (1) Data frame sent I0330 13:21:14.963342 6 log.go:172] (0xc0022f1600) (0xc0025e1860) Stream removed, broadcasting: 1 I0330 13:21:14.963466 6 log.go:172] (0xc0022f1600) (0xc0025e1860) Stream removed, broadcasting: 1 I0330 13:21:14.963484 6 log.go:172] (0xc0022f1600) (0xc001c25680) Stream removed, broadcasting: 3 I0330 13:21:14.963514 6 log.go:172] (0xc0022f1600) Go away received I0330 13:21:14.963577 6 log.go:172] (0xc0022f1600) (0xc001c257c0) Stream removed, broadcasting: 5 Mar 30 13:21:14.963: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:21:14.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9991" for this suite. 
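Each node-pod UDP check above execs `echo hostName | nc -w 1 -u <podIP> 8081` inside a host-exec pod and expects the netserver pod to answer. The same probe written directly in Go (the pod IP and port are copied from the log; this must run from somewhere that can reach the pod network):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Equivalent of: echo hostName | nc -w 1 -u 10.244.2.73 8081
        conn, err := net.DialTimeout("udp", "10.244.2.73:8081", time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()
        conn.SetDeadline(time.Now().Add(time.Second)) // roughly nc's -w 1
        if _, err := conn.Write([]byte("hostName\n")); err != nil {
            fmt.Fprintln(os.Stderr, "write:", err)
            os.Exit(1)
        }
        buf := make([]byte, 1024)
        n, err := conn.Read(buf) // the netserver echoes back an endpoint name
        if err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
        fmt.Printf("reply: %s", buf[:n])
    }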
Mar 30 13:21:36.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:21:37.093: INFO: namespace pod-network-test-9991 deletion completed in 22.124952769s • [SLOW TEST:50.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:21:37.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 30 13:21:37.166: INFO: Waiting up to 5m0s for pod "pod-8fd61513-13f1-4815-9dc6-5c4355386c17" in namespace "emptydir-7658" to be "success or failure" Mar 30 13:21:37.225: INFO: Pod "pod-8fd61513-13f1-4815-9dc6-5c4355386c17": Phase="Pending", Reason="", readiness=false. Elapsed: 58.565475ms Mar 30 13:21:39.273: INFO: Pod "pod-8fd61513-13f1-4815-9dc6-5c4355386c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106642931s Mar 30 13:21:41.277: INFO: Pod "pod-8fd61513-13f1-4815-9dc6-5c4355386c17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110470077s STEP: Saw pod success Mar 30 13:21:41.277: INFO: Pod "pod-8fd61513-13f1-4815-9dc6-5c4355386c17" satisfied condition "success or failure" Mar 30 13:21:41.279: INFO: Trying to get logs from node iruya-worker2 pod pod-8fd61513-13f1-4815-9dc6-5c4355386c17 container test-container: STEP: delete the pod Mar 30 13:21:41.298: INFO: Waiting for pod pod-8fd61513-13f1-4815-9dc6-5c4355386c17 to disappear Mar 30 13:21:41.314: INFO: Pod pod-8fd61513-13f1-4815-9dc6-5c4355386c17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:21:41.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7658" for this suite. 
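The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' lines are the framework polling the pod's status.phase until it is terminal. A sketch of the same loop with client-go, using the namespace and pod name from the test above (the Get signature shown is the current context-taking one; client-go releases contemporary with this v1.15 log omit the context argument):

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, name := "emptydir-7658", "pod-8fd61513-13f1-4815-9dc6-5c4355386c17"
        // Poll every 2s, up to 5m, until the pod reaches a terminal phase.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case v1.PodSucceeded:
                return true, nil
            case v1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            }
            return false, nil // still Pending or Running; keep polling
        })
        fmt.Println("result:", err)
    }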
Mar 30 13:21:47.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:21:47.451: INFO: namespace emptydir-7658 deletion completed in 6.133141985s • [SLOW TEST:10.357 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:21:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-e28b2a74-4fdd-4cbf-adab-a308f2552cbe in namespace container-probe-7721 Mar 30 13:21:51.540: INFO: Started pod liveness-e28b2a74-4fdd-4cbf-adab-a308f2552cbe in namespace container-probe-7721 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 13:21:51.544: INFO: Initial restart count of pod liveness-e28b2a74-4fdd-4cbf-adab-a308f2552cbe is 0 Mar 30 13:22:13.600: INFO: Restart count of pod container-probe-7721/liveness-e28b2a74-4fdd-4cbf-adab-a308f2552cbe is now 1 (22.056127726s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:22:13.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7721" for this suite. 
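The restart observed above (restartCount 0 to 1 after roughly 22s) is produced by an HTTP liveness probe against /healthz on a server that starts failing after a while; once the probe fails, the kubelet kills and restarts the container. A sketch of such a container spec (the image and timings are illustrative, and in the v1.15-era API shown in this log the probe's handler field is named Handler, later renamed ProbeHandler):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        c := v1.Container{
            Name:  "liveness",
            Image: "docker.io/library/nginx:1.14-alpine", // illustrative; the test uses its own liveness image
            LivenessProbe: &v1.Probe{
                Handler: v1.Handler{
                    HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                },
                InitialDelaySeconds: 15, // give the server time to come up
                FailureThreshold:    1,  // a single failed GET triggers a restart
            },
        }
        b, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(b))
    }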
Mar 30 13:22:19.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:22:19.708: INFO: namespace container-probe-7721 deletion completed in 6.091476661s • [SLOW TEST:32.257 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:22:19.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d3f42af6-7be2-4cdb-95cc-83d205433bca STEP: Creating a pod to test consume secrets Mar 30 13:22:19.790: INFO: Waiting up to 5m0s for pod "pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779" in namespace "secrets-1978" to be "success or failure" Mar 30 13:22:19.800: INFO: Pod "pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100314ms Mar 30 13:22:21.805: INFO: Pod "pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014538079s Mar 30 13:22:23.810: INFO: Pod "pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019533857s STEP: Saw pod success Mar 30 13:22:23.810: INFO: Pod "pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779" satisfied condition "success or failure" Mar 30 13:22:23.814: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779 container secret-volume-test: STEP: delete the pod Mar 30 13:22:23.846: INFO: Waiting for pod pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779 to disappear Mar 30 13:22:23.860: INFO: Pod pod-secrets-2329076e-b8cb-49ae-9e32-9611e1da8779 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:22:23.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1978" for this suite. 
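The secret test above combines three knobs: the pod-level securityContext sets runAsUser (non-root) and fsGroup, and the secret volume's defaultMode fixes the projected file mode, so the files come out owned by the fsGroup with the requested permissions. A sketch (the UID, GID, and mode values are illustrative; the secret name is the one created by the test):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }
    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // illustrative
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{
                    RunAsUser: int64Ptr(1000), // non-root
                    FSGroup:   int64Ptr(2000), // projected files are group-owned by this GID
                },
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{
                        SecretName:  "secret-test-d3f42af6-7be2-4cdb-95cc-83d205433bca",
                        DefaultMode: int32Ptr(0440), // mode applied to each projected file
                    }},
                }},
                Containers: []v1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume"},
                    VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }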
Mar 30 13:22:29.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:22:29.986: INFO: namespace secrets-1978 deletion completed in 6.123379312s • [SLOW TEST:10.278 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:22:29.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 30 13:22:30.042: INFO: Waiting up to 5m0s for pod "client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344" in namespace "containers-7372" to be "success or failure" Mar 30 13:22:30.046: INFO: Pod "client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344": Phase="Pending", Reason="", readiness=false. Elapsed: 3.742222ms Mar 30 13:22:32.050: INFO: Pod "client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007273794s Mar 30 13:22:34.054: INFO: Pod "client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011576374s STEP: Saw pod success Mar 30 13:22:34.054: INFO: Pod "client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344" satisfied condition "success or failure" Mar 30 13:22:34.057: INFO: Trying to get logs from node iruya-worker pod client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344 container test-container: STEP: delete the pod Mar 30 13:22:34.078: INFO: Waiting for pod client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344 to disappear Mar 30 13:22:34.082: INFO: Pod client-containers-8fc006e2-6c32-451e-b2c5-68e8ab560344 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:22:34.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7372" for this suite. 
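"Override the image's default arguments (docker cmd)" maps to the container's args field: args replaces the image's CMD while leaving its ENTRYPOINT intact, whereas command would replace the ENTRYPOINT itself. In the Go API that distinction looks like this (image and argument values are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        c := v1.Container{
            Name:  "test-container",
            Image: "docker.io/library/busybox:1.29",
            // Args overrides the image's CMD but keeps its ENTRYPOINT;
            // Command (unset here) would override the ENTRYPOINT itself.
            Args: []string{"echo", "override", "arguments"},
        }
        b, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(b))
    }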
Mar 30 13:22:40.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:22:40.187: INFO: namespace containers-7372 deletion completed in 6.101836292s • [SLOW TEST:10.201 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:22:40.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-5ab60a59-4f0c-4c92-bfd8-e7d832b6e8ec STEP: Creating a pod to test consume configMaps Mar 30 13:22:40.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d" in namespace "configmap-7713" to be "success or failure" Mar 30 13:22:40.286: INFO: Pod "pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.2231ms Mar 30 13:22:42.290: INFO: Pod "pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01310887s Mar 30 13:22:44.295: INFO: Pod "pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017506598s STEP: Saw pod success Mar 30 13:22:44.295: INFO: Pod "pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d" satisfied condition "success or failure" Mar 30 13:22:44.298: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d container configmap-volume-test: STEP: delete the pod Mar 30 13:22:44.353: INFO: Waiting for pod pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d to disappear Mar 30 13:22:44.358: INFO: Pod pod-configmaps-2abc9dc4-8b91-4da1-bc4c-59a924820f7d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:22:44.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7713" for this suite. 
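The configMap variant is the same projection mechanism as the secret test above, but fed from a ConfigMap; defaultMode again controls the mode of the projected files. A sketch that builds both objects (the data key, value, and mode are illustrative; the configMap name is the one from the log):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cm := v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-5ab60a59-4f0c-4c92-bfd8-e7d832b6e8ec"},
            Data:       map[string]string{"data-1": "value-1"}, // illustrative key/value
        }
        vol := v1.Volume{
            Name: "configmap-volume",
            VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{
                LocalObjectReference: v1.LocalObjectReference{Name: cm.Name},
                DefaultMode:          int32Ptr(0400), // mode for each projected file
            }},
        }
        b, _ := json.MarshalIndent(map[string]interface{}{"configMap": cm, "volume": vol}, "", "  ")
        fmt.Println(string(b))
    }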
Mar 30 13:22:50.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:22:50.453: INFO: namespace configmap-7713 deletion completed in 6.092675525s • [SLOW TEST:10.266 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:22:50.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2048 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2048 STEP: Creating statefulset with conflicting port in namespace statefulset-2048 STEP: Waiting until pod test-pod starts running in namespace statefulset-2048 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2048 Mar 30 13:22:54.592: INFO: Observed stateful pod in namespace: statefulset-2048, name: ss-0, uid: 406d25e3-986b-4be0-b5f3-c5b556c271a4, status phase: Pending. Waiting for statefulset controller to delete. Mar 30 13:23:02.149: INFO: Observed stateful pod in namespace: statefulset-2048, name: ss-0, uid: 406d25e3-986b-4be0-b5f3-c5b556c271a4, status phase: Failed. Waiting for statefulset controller to delete. Mar 30 13:23:02.191: INFO: Observed stateful pod in namespace: statefulset-2048, name: ss-0, uid: 406d25e3-986b-4be0-b5f3-c5b556c271a4, status phase: Failed. Waiting for statefulset controller to delete.
Mar 30 13:23:02.198: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2048 STEP: Removing pod with conflicting port in namespace statefulset-2048 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2048 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 30 13:23:06.289: INFO: Deleting all statefulsets in ns statefulset-2048 Mar 30 13:23:06.292: INFO: Scaling statefulset ss to 0 Mar 30 13:23:16.311: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 13:23:16.314: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:23:16.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2048" for this suite. Mar 30 13:23:22.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:23:22.440: INFO: namespace statefulset-2048 deletion completed in 6.100818528s • [SLOW TEST:31.986 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:23:22.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 30 13:23:22.498: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:23:27.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5408" for this suite.
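The property the InitContainer test above relies on is that with restartPolicy: Never a failing init container is terminal: the pod's phase goes to Failed and the app containers are never started. A minimal sketch of such a pod, assuming a stock busybox image and illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo              # illustrative name
spec:
  restartPolicy: Never              # init failure is terminal; nothing is retried
  initContainers:
  - name: init-fail
    image: busybox                  # assumed image; the suite uses its own test image
    command: ["/bin/false"]         # exits non-zero, so the init container fails
  containers:
  - name: app
    image: busybox
    command: ["/bin/true"]          # never runs: app containers start only after all init containers succeed

kubectl get pod init-fail-demo should report a status of Init:Error and a pod phase of Failed, never Running.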
Mar 30 13:23:33.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:23:33.736: INFO: namespace init-container-5408 deletion completed in 6.106555369s • [SLOW TEST:11.296 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:23:33.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 30 13:23:33.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5713' Mar 30 13:23:34.069: INFO: stderr: "" Mar 30 13:23:34.069: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:23:34.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:34.220: INFO: stderr: "" Mar 30 13:23:34.220: INFO: stdout: "update-demo-nautilus-m4ckf update-demo-nautilus-tdsz8 " Mar 30 13:23:34.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:34.310: INFO: stderr: "" Mar 30 13:23:34.310: INFO: stdout: "" Mar 30 13:23:34.310: INFO: update-demo-nautilus-m4ckf is created but not running Mar 30 13:23:39.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:39.399: INFO: stderr: "" Mar 30 13:23:39.399: INFO: stdout: "update-demo-nautilus-m4ckf update-demo-nautilus-tdsz8 " Mar 30 13:23:39.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:39.492: INFO: stderr: "" Mar 30 13:23:39.492: INFO: stdout: "true" Mar 30 13:23:39.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:39.598: INFO: stderr: "" Mar 30 13:23:39.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:23:39.598: INFO: validating pod update-demo-nautilus-m4ckf Mar 30 13:23:39.606: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:23:39.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:23:39.606: INFO: update-demo-nautilus-m4ckf is verified up and running Mar 30 13:23:39.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdsz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:39.746: INFO: stderr: "" Mar 30 13:23:39.746: INFO: stdout: "true" Mar 30 13:23:39.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdsz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:39.840: INFO: stderr: "" Mar 30 13:23:39.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:23:39.840: INFO: validating pod update-demo-nautilus-tdsz8 Mar 30 13:23:39.869: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:23:39.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:23:39.869: INFO: update-demo-nautilus-tdsz8 is verified up and running STEP: scaling down the replication controller Mar 30 13:23:39.871: INFO: scanned /root for discovery docs: Mar 30 13:23:39.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5713' Mar 30 13:23:40.988: INFO: stderr: "" Mar 30 13:23:40.988: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:23:40.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:41.080: INFO: stderr: "" Mar 30 13:23:41.080: INFO: stdout: "update-demo-nautilus-m4ckf update-demo-nautilus-tdsz8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 30 13:23:46.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:46.184: INFO: stderr: "" Mar 30 13:23:46.184: INFO: stdout: "update-demo-nautilus-m4ckf " Mar 30 13:23:46.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:46.284: INFO: stderr: "" Mar 30 13:23:46.284: INFO: stdout: "true" Mar 30 13:23:46.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:46.376: INFO: stderr: "" Mar 30 13:23:46.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:23:46.377: INFO: validating pod update-demo-nautilus-m4ckf Mar 30 13:23:46.380: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:23:46.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:23:46.380: INFO: update-demo-nautilus-m4ckf is verified up and running STEP: scaling up the replication controller Mar 30 13:23:46.382: INFO: scanned /root for discovery docs: Mar 30 13:23:46.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5713' Mar 30 13:23:47.550: INFO: stderr: "" Mar 30 13:23:47.550: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:23:47.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:47.658: INFO: stderr: "" Mar 30 13:23:47.658: INFO: stdout: "update-demo-nautilus-b2bgt update-demo-nautilus-m4ckf " Mar 30 13:23:47.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2bgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:47.748: INFO: stderr: "" Mar 30 13:23:47.748: INFO: stdout: "" Mar 30 13:23:47.748: INFO: update-demo-nautilus-b2bgt is created but not running Mar 30 13:23:52.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5713' Mar 30 13:23:52.842: INFO: stderr: "" Mar 30 13:23:52.843: INFO: stdout: "update-demo-nautilus-b2bgt update-demo-nautilus-m4ckf " Mar 30 13:23:52.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2bgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:52.926: INFO: stderr: "" Mar 30 13:23:52.926: INFO: stdout: "true" Mar 30 13:23:52.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2bgt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:53.018: INFO: stderr: "" Mar 30 13:23:53.018: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:23:53.018: INFO: validating pod update-demo-nautilus-b2bgt Mar 30 13:23:53.022: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:23:53.022: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:23:53.022: INFO: update-demo-nautilus-b2bgt is verified up and running Mar 30 13:23:53.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:53.114: INFO: stderr: "" Mar 30 13:23:53.114: INFO: stdout: "true" Mar 30 13:23:53.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4ckf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5713' Mar 30 13:23:53.201: INFO: stderr: "" Mar 30 13:23:53.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:23:53.201: INFO: validating pod update-demo-nautilus-m4ckf Mar 30 13:23:53.205: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:23:53.205: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:23:53.205: INFO: update-demo-nautilus-m4ckf is verified up and running STEP: using delete to clean up resources Mar 30 13:23:53.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5713' Mar 30 13:23:53.314: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 30 13:23:53.314: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 30 13:23:53.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5713' Mar 30 13:23:53.407: INFO: stderr: "No resources found.\n" Mar 30 13:23:53.407: INFO: stdout: "" Mar 30 13:23:53.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 13:23:53.495: INFO: stderr: "" Mar 30 13:23:53.495: INFO: stdout: "update-demo-nautilus-b2bgt\nupdate-demo-nautilus-m4ckf\n" Mar 30 13:23:53.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5713' Mar 30 13:23:54.097: INFO: stderr: "No resources found.\n" Mar 30 13:23:54.097: INFO: stdout: "" Mar 30 13:23:54.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 13:23:54.199: INFO: stderr: "" Mar 30 13:23:54.199: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:23:54.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5713" for this suite. Mar 30 13:24:16.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:24:16.298: INFO: namespace kubectl-5713 deletion completed in 22.095097713s • [SLOW TEST:42.560 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:24:16.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-f6e8582e-f57e-4fc6-837d-2de63613b263 STEP: Creating a pod to test consume secrets Mar 30 13:24:16.379: INFO: Waiting up to 5m0s for pod "pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb" in namespace "secrets-9915" to be "success or failure" Mar 30 13:24:16.384: INFO: Pod "pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.833723ms Mar 30 13:24:18.387: INFO: Pod "pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008529723s Mar 30 13:24:20.392: INFO: Pod "pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012713163s STEP: Saw pod success Mar 30 13:24:20.392: INFO: Pod "pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb" satisfied condition "success or failure" Mar 30 13:24:20.394: INFO: Trying to get logs from node iruya-worker pod pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb container secret-volume-test: STEP: delete the pod Mar 30 13:24:20.420: INFO: Waiting for pod pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb to disappear Mar 30 13:24:20.432: INFO: Pod pod-secrets-85a966c0-7c67-4ab1-b84f-fc64736c67eb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:24:20.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9915" for this suite. Mar 30 13:24:26.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:24:26.529: INFO: namespace secrets-9915 deletion completed in 6.093672833s • [SLOW TEST:10.231 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:24:26.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-1012 I0330 13:24:26.595119 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1012, replica count: 1 I0330 13:24:27.645530 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 13:24:28.645764 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 13:24:29.646056 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 13:24:29.767: INFO: Created: latency-svc-wrsjt Mar 30 13:24:29.773: INFO: Got endpoints: latency-svc-wrsjt [27.633841ms] Mar 30 13:24:29.803: INFO: Created: latency-svc-sckrk Mar 30 13:24:29.822: INFO: Got endpoints: latency-svc-sckrk [48.163402ms] Mar 30 13:24:29.887: INFO: Created: latency-svc-px8pw Mar 30 13:24:29.925: INFO: Got endpoints: latency-svc-px8pw 
[151.200325ms] Mar 30 13:24:29.965: INFO: Created: latency-svc-szwvr Mar 30 13:24:29.978: INFO: Got endpoints: latency-svc-szwvr [204.647806ms] Mar 30 13:24:30.025: INFO: Created: latency-svc-lq4rn Mar 30 13:24:30.071: INFO: Got endpoints: latency-svc-lq4rn [297.288013ms] Mar 30 13:24:30.105: INFO: Created: latency-svc-77z6l Mar 30 13:24:30.162: INFO: Got endpoints: latency-svc-77z6l [388.023003ms] Mar 30 13:24:30.175: INFO: Created: latency-svc-v9w7f Mar 30 13:24:30.189: INFO: Got endpoints: latency-svc-v9w7f [415.075571ms] Mar 30 13:24:30.211: INFO: Created: latency-svc-p42gn Mar 30 13:24:30.235: INFO: Got endpoints: latency-svc-p42gn [461.075033ms] Mar 30 13:24:30.324: INFO: Created: latency-svc-9fwk5 Mar 30 13:24:30.328: INFO: Got endpoints: latency-svc-9fwk5 [554.070109ms] Mar 30 13:24:30.350: INFO: Created: latency-svc-tc26b Mar 30 13:24:30.364: INFO: Got endpoints: latency-svc-tc26b [589.634048ms] Mar 30 13:24:30.398: INFO: Created: latency-svc-mrm4s Mar 30 13:24:30.411: INFO: Got endpoints: latency-svc-mrm4s [637.397417ms] Mar 30 13:24:30.486: INFO: Created: latency-svc-6bw4v Mar 30 13:24:30.490: INFO: Got endpoints: latency-svc-6bw4v [716.187228ms] Mar 30 13:24:30.519: INFO: Created: latency-svc-l85dt Mar 30 13:24:30.532: INFO: Got endpoints: latency-svc-l85dt [757.952462ms] Mar 30 13:24:30.555: INFO: Created: latency-svc-9cjcq Mar 30 13:24:30.577: INFO: Got endpoints: latency-svc-9cjcq [803.160419ms] Mar 30 13:24:30.641: INFO: Created: latency-svc-xgzcv Mar 30 13:24:30.647: INFO: Got endpoints: latency-svc-xgzcv [872.926193ms] Mar 30 13:24:30.667: INFO: Created: latency-svc-dq9m9 Mar 30 13:24:30.683: INFO: Got endpoints: latency-svc-dq9m9 [908.931621ms] Mar 30 13:24:30.705: INFO: Created: latency-svc-n7d9g Mar 30 13:24:30.720: INFO: Got endpoints: latency-svc-n7d9g [897.348657ms] Mar 30 13:24:30.742: INFO: Created: latency-svc-ddq24 Mar 30 13:24:30.808: INFO: Got endpoints: latency-svc-ddq24 [883.485238ms] Mar 30 13:24:30.811: INFO: Created: latency-svc-khwx9 Mar 30 13:24:30.835: INFO: Got endpoints: latency-svc-khwx9 [856.351224ms] Mar 30 13:24:30.865: INFO: Created: latency-svc-nmzq5 Mar 30 13:24:30.882: INFO: Got endpoints: latency-svc-nmzq5 [810.809563ms] Mar 30 13:24:30.902: INFO: Created: latency-svc-5h8dt Mar 30 13:24:30.940: INFO: Got endpoints: latency-svc-5h8dt [777.957003ms] Mar 30 13:24:30.951: INFO: Created: latency-svc-n9smq Mar 30 13:24:30.966: INFO: Got endpoints: latency-svc-n9smq [777.465419ms] Mar 30 13:24:30.986: INFO: Created: latency-svc-cvf88 Mar 30 13:24:30.997: INFO: Got endpoints: latency-svc-cvf88 [762.286569ms] Mar 30 13:24:31.015: INFO: Created: latency-svc-mwv2t Mar 30 13:24:31.027: INFO: Got endpoints: latency-svc-mwv2t [699.369347ms] Mar 30 13:24:31.084: INFO: Created: latency-svc-4rwh2 Mar 30 13:24:31.087: INFO: Got endpoints: latency-svc-4rwh2 [723.309257ms] Mar 30 13:24:31.111: INFO: Created: latency-svc-sz9fx Mar 30 13:24:31.123: INFO: Got endpoints: latency-svc-sz9fx [711.905097ms] Mar 30 13:24:31.155: INFO: Created: latency-svc-qvnrl Mar 30 13:24:31.172: INFO: Got endpoints: latency-svc-qvnrl [681.561576ms] Mar 30 13:24:31.257: INFO: Created: latency-svc-5hgxd Mar 30 13:24:31.279: INFO: Got endpoints: latency-svc-5hgxd [747.387162ms] Mar 30 13:24:31.303: INFO: Created: latency-svc-46g7p Mar 30 13:24:31.316: INFO: Got endpoints: latency-svc-46g7p [739.081517ms] Mar 30 13:24:31.335: INFO: Created: latency-svc-mkwf6 Mar 30 13:24:31.346: INFO: Got endpoints: latency-svc-mkwf6 [699.533904ms] Mar 30 13:24:31.414: INFO: Created: latency-svc-r8pkv 
Mar 30 13:24:31.419: INFO: Got endpoints: latency-svc-r8pkv [735.877405ms] Mar 30 13:24:31.447: INFO: Created: latency-svc-nsnlp Mar 30 13:24:31.477: INFO: Got endpoints: latency-svc-nsnlp [757.219944ms] Mar 30 13:24:31.507: INFO: Created: latency-svc-jmbxc Mar 30 13:24:31.581: INFO: Got endpoints: latency-svc-jmbxc [772.135835ms] Mar 30 13:24:31.584: INFO: Created: latency-svc-ss8k4 Mar 30 13:24:31.588: INFO: Got endpoints: latency-svc-ss8k4 [752.75066ms] Mar 30 13:24:31.610: INFO: Created: latency-svc-4rx8k Mar 30 13:24:31.624: INFO: Got endpoints: latency-svc-4rx8k [741.929091ms] Mar 30 13:24:31.645: INFO: Created: latency-svc-wqknk Mar 30 13:24:31.675: INFO: Got endpoints: latency-svc-wqknk [735.169949ms] Mar 30 13:24:31.730: INFO: Created: latency-svc-q9b9h Mar 30 13:24:31.738: INFO: Got endpoints: latency-svc-q9b9h [771.70443ms] Mar 30 13:24:31.761: INFO: Created: latency-svc-jr59x Mar 30 13:24:31.774: INFO: Got endpoints: latency-svc-jr59x [777.124616ms] Mar 30 13:24:31.796: INFO: Created: latency-svc-vtm7c Mar 30 13:24:31.842: INFO: Got endpoints: latency-svc-vtm7c [814.700896ms] Mar 30 13:24:31.868: INFO: Created: latency-svc-jrbmt Mar 30 13:24:31.903: INFO: Got endpoints: latency-svc-jrbmt [815.663582ms] Mar 30 13:24:31.934: INFO: Created: latency-svc-99d9k Mar 30 13:24:31.950: INFO: Got endpoints: latency-svc-99d9k [826.146296ms] Mar 30 13:24:32.006: INFO: Created: latency-svc-qklr2 Mar 30 13:24:32.010: INFO: Got endpoints: latency-svc-qklr2 [838.076598ms] Mar 30 13:24:32.036: INFO: Created: latency-svc-xdd2s Mar 30 13:24:32.064: INFO: Got endpoints: latency-svc-xdd2s [784.9874ms] Mar 30 13:24:32.095: INFO: Created: latency-svc-g84rs Mar 30 13:24:32.149: INFO: Got endpoints: latency-svc-g84rs [833.035013ms] Mar 30 13:24:32.170: INFO: Created: latency-svc-jxzl5 Mar 30 13:24:32.186: INFO: Got endpoints: latency-svc-jxzl5 [840.048392ms] Mar 30 13:24:32.216: INFO: Created: latency-svc-mrj9x Mar 30 13:24:32.227: INFO: Got endpoints: latency-svc-mrj9x [807.998863ms] Mar 30 13:24:32.317: INFO: Created: latency-svc-xz426 Mar 30 13:24:32.320: INFO: Got endpoints: latency-svc-xz426 [842.925532ms] Mar 30 13:24:32.353: INFO: Created: latency-svc-5d2wk Mar 30 13:24:32.365: INFO: Got endpoints: latency-svc-5d2wk [784.350826ms] Mar 30 13:24:32.386: INFO: Created: latency-svc-485ch Mar 30 13:24:32.402: INFO: Got endpoints: latency-svc-485ch [813.935975ms] Mar 30 13:24:32.461: INFO: Created: latency-svc-hvkmv Mar 30 13:24:32.464: INFO: Got endpoints: latency-svc-hvkmv [840.036128ms] Mar 30 13:24:32.491: INFO: Created: latency-svc-dz6b7 Mar 30 13:24:32.515: INFO: Got endpoints: latency-svc-dz6b7 [839.807268ms] Mar 30 13:24:32.545: INFO: Created: latency-svc-ntg85 Mar 30 13:24:32.558: INFO: Got endpoints: latency-svc-ntg85 [820.232599ms] Mar 30 13:24:32.605: INFO: Created: latency-svc-6ks7n Mar 30 13:24:32.613: INFO: Got endpoints: latency-svc-6ks7n [838.054627ms] Mar 30 13:24:32.630: INFO: Created: latency-svc-ckvsb Mar 30 13:24:32.643: INFO: Got endpoints: latency-svc-ckvsb [800.704818ms] Mar 30 13:24:32.665: INFO: Created: latency-svc-xvqn9 Mar 30 13:24:32.689: INFO: Got endpoints: latency-svc-xvqn9 [785.929532ms] Mar 30 13:24:32.738: INFO: Created: latency-svc-ppd74 Mar 30 13:24:32.742: INFO: Got endpoints: latency-svc-ppd74 [791.979582ms] Mar 30 13:24:32.767: INFO: Created: latency-svc-x6gcc Mar 30 13:24:32.788: INFO: Got endpoints: latency-svc-x6gcc [778.348853ms] Mar 30 13:24:32.817: INFO: Created: latency-svc-6prvk Mar 30 13:24:32.830: INFO: Got endpoints: latency-svc-6prvk [765.638825ms] 
Mar 30 13:24:32.886: INFO: Created: latency-svc-b2gqq Mar 30 13:24:32.890: INFO: Got endpoints: latency-svc-b2gqq [741.244831ms] Mar 30 13:24:32.917: INFO: Created: latency-svc-mtp5j Mar 30 13:24:32.933: INFO: Got endpoints: latency-svc-mtp5j [746.532362ms] Mar 30 13:24:32.954: INFO: Created: latency-svc-9c8hw Mar 30 13:24:32.963: INFO: Got endpoints: latency-svc-9c8hw [735.810201ms] Mar 30 13:24:32.983: INFO: Created: latency-svc-lkd2t Mar 30 13:24:33.042: INFO: Got endpoints: latency-svc-lkd2t [721.826997ms] Mar 30 13:24:33.044: INFO: Created: latency-svc-stl79 Mar 30 13:24:33.048: INFO: Got endpoints: latency-svc-stl79 [682.716746ms] Mar 30 13:24:33.105: INFO: Created: latency-svc-dhbkx Mar 30 13:24:33.139: INFO: Got endpoints: latency-svc-dhbkx [736.928078ms] Mar 30 13:24:33.212: INFO: Created: latency-svc-txgsr Mar 30 13:24:33.241: INFO: Got endpoints: latency-svc-txgsr [776.811401ms] Mar 30 13:24:33.271: INFO: Created: latency-svc-crv5r Mar 30 13:24:33.353: INFO: Got endpoints: latency-svc-crv5r [838.056882ms] Mar 30 13:24:33.356: INFO: Created: latency-svc-x4n55 Mar 30 13:24:33.366: INFO: Got endpoints: latency-svc-x4n55 [807.909779ms] Mar 30 13:24:33.410: INFO: Created: latency-svc-ftpsl Mar 30 13:24:33.447: INFO: Got endpoints: latency-svc-ftpsl [834.860562ms] Mar 30 13:24:33.501: INFO: Created: latency-svc-b2dp9 Mar 30 13:24:33.518: INFO: Got endpoints: latency-svc-b2dp9 [874.64777ms] Mar 30 13:24:33.537: INFO: Created: latency-svc-drsdf Mar 30 13:24:33.554: INFO: Got endpoints: latency-svc-drsdf [865.201725ms] Mar 30 13:24:33.572: INFO: Created: latency-svc-4g7kh Mar 30 13:24:33.610: INFO: Got endpoints: latency-svc-4g7kh [868.579626ms] Mar 30 13:24:33.625: INFO: Created: latency-svc-6pxtl Mar 30 13:24:33.639: INFO: Got endpoints: latency-svc-6pxtl [850.229394ms] Mar 30 13:24:33.681: INFO: Created: latency-svc-9rltx Mar 30 13:24:33.692: INFO: Got endpoints: latency-svc-9rltx [862.181678ms] Mar 30 13:24:33.749: INFO: Created: latency-svc-jf5fc Mar 30 13:24:33.753: INFO: Got endpoints: latency-svc-jf5fc [862.082276ms] Mar 30 13:24:33.777: INFO: Created: latency-svc-5bvgc Mar 30 13:24:33.788: INFO: Got endpoints: latency-svc-5bvgc [855.305986ms] Mar 30 13:24:33.811: INFO: Created: latency-svc-nwbdb Mar 30 13:24:33.835: INFO: Got endpoints: latency-svc-nwbdb [872.12735ms] Mar 30 13:24:33.899: INFO: Created: latency-svc-jjgq9 Mar 30 13:24:33.902: INFO: Got endpoints: latency-svc-jjgq9 [860.216839ms] Mar 30 13:24:33.926: INFO: Created: latency-svc-dhsrg Mar 30 13:24:33.939: INFO: Got endpoints: latency-svc-dhsrg [891.556345ms] Mar 30 13:24:33.980: INFO: Created: latency-svc-b48w6 Mar 30 13:24:34.065: INFO: Got endpoints: latency-svc-b48w6 [926.659558ms] Mar 30 13:24:34.068: INFO: Created: latency-svc-grcms Mar 30 13:24:34.093: INFO: Got endpoints: latency-svc-grcms [852.290999ms] Mar 30 13:24:34.118: INFO: Created: latency-svc-jn69m Mar 30 13:24:34.128: INFO: Got endpoints: latency-svc-jn69m [774.684556ms] Mar 30 13:24:34.203: INFO: Created: latency-svc-k59bk Mar 30 13:24:34.210: INFO: Got endpoints: latency-svc-k59bk [843.997281ms] Mar 30 13:24:34.233: INFO: Created: latency-svc-r5jkd Mar 30 13:24:34.247: INFO: Got endpoints: latency-svc-r5jkd [799.408116ms] Mar 30 13:24:34.270: INFO: Created: latency-svc-phncz Mar 30 13:24:34.284: INFO: Got endpoints: latency-svc-phncz [766.32629ms] Mar 30 13:24:34.330: INFO: Created: latency-svc-cc5t4 Mar 30 13:24:34.332: INFO: Got endpoints: latency-svc-cc5t4 [778.150758ms] Mar 30 13:24:34.358: INFO: Created: latency-svc-dmqfm Mar 30 13:24:34.368: 
INFO: Got endpoints: latency-svc-dmqfm [757.407625ms] Mar 30 13:24:34.400: INFO: Created: latency-svc-w5grl Mar 30 13:24:34.410: INFO: Got endpoints: latency-svc-w5grl [771.573651ms] Mar 30 13:24:34.455: INFO: Created: latency-svc-67bp2 Mar 30 13:24:34.458: INFO: Got endpoints: latency-svc-67bp2 [766.113821ms] Mar 30 13:24:34.485: INFO: Created: latency-svc-sstx5 Mar 30 13:24:34.507: INFO: Got endpoints: latency-svc-sstx5 [753.836343ms] Mar 30 13:24:34.531: INFO: Created: latency-svc-f5f7z Mar 30 13:24:34.543: INFO: Got endpoints: latency-svc-f5f7z [754.246585ms] Mar 30 13:24:34.598: INFO: Created: latency-svc-8tw8k Mar 30 13:24:34.621: INFO: Got endpoints: latency-svc-8tw8k [786.438937ms] Mar 30 13:24:34.622: INFO: Created: latency-svc-mgm6v Mar 30 13:24:34.646: INFO: Got endpoints: latency-svc-mgm6v [744.388273ms] Mar 30 13:24:34.677: INFO: Created: latency-svc-bjkx5 Mar 30 13:24:34.694: INFO: Got endpoints: latency-svc-bjkx5 [754.382115ms] Mar 30 13:24:34.736: INFO: Created: latency-svc-htrn4 Mar 30 13:24:34.742: INFO: Got endpoints: latency-svc-htrn4 [676.339373ms] Mar 30 13:24:34.761: INFO: Created: latency-svc-sfhlh Mar 30 13:24:34.772: INFO: Got endpoints: latency-svc-sfhlh [679.150175ms] Mar 30 13:24:34.790: INFO: Created: latency-svc-9tpl6 Mar 30 13:24:34.803: INFO: Got endpoints: latency-svc-9tpl6 [675.004445ms] Mar 30 13:24:34.827: INFO: Created: latency-svc-gqcf4 Mar 30 13:24:34.862: INFO: Got endpoints: latency-svc-gqcf4 [651.16983ms] Mar 30 13:24:34.886: INFO: Created: latency-svc-s9ptq Mar 30 13:24:34.899: INFO: Got endpoints: latency-svc-s9ptq [652.178154ms] Mar 30 13:24:34.917: INFO: Created: latency-svc-zmb6t Mar 30 13:24:34.930: INFO: Got endpoints: latency-svc-zmb6t [645.494303ms] Mar 30 13:24:34.953: INFO: Created: latency-svc-tqtdq Mar 30 13:24:34.960: INFO: Got endpoints: latency-svc-tqtdq [627.387714ms] Mar 30 13:24:35.006: INFO: Created: latency-svc-5c47t Mar 30 13:24:35.018: INFO: Got endpoints: latency-svc-5c47t [650.40589ms] Mar 30 13:24:35.049: INFO: Created: latency-svc-2zc25 Mar 30 13:24:35.062: INFO: Got endpoints: latency-svc-2zc25 [652.113863ms] Mar 30 13:24:35.097: INFO: Created: latency-svc-kdf9l Mar 30 13:24:35.137: INFO: Got endpoints: latency-svc-kdf9l [678.440982ms] Mar 30 13:24:35.149: INFO: Created: latency-svc-57wn7 Mar 30 13:24:35.171: INFO: Got endpoints: latency-svc-57wn7 [664.331047ms] Mar 30 13:24:35.275: INFO: Created: latency-svc-pxrbn Mar 30 13:24:35.288: INFO: Got endpoints: latency-svc-pxrbn [745.485485ms] Mar 30 13:24:35.330: INFO: Created: latency-svc-b2mx6 Mar 30 13:24:35.345: INFO: Got endpoints: latency-svc-b2mx6 [723.985741ms] Mar 30 13:24:35.413: INFO: Created: latency-svc-nt2bc Mar 30 13:24:35.416: INFO: Got endpoints: latency-svc-nt2bc [769.087368ms] Mar 30 13:24:35.445: INFO: Created: latency-svc-whkdn Mar 30 13:24:35.460: INFO: Got endpoints: latency-svc-whkdn [766.034551ms] Mar 30 13:24:35.487: INFO: Created: latency-svc-rgmr5 Mar 30 13:24:35.502: INFO: Got endpoints: latency-svc-rgmr5 [760.23009ms] Mar 30 13:24:35.551: INFO: Created: latency-svc-76bb7 Mar 30 13:24:35.554: INFO: Got endpoints: latency-svc-76bb7 [781.128337ms] Mar 30 13:24:35.575: INFO: Created: latency-svc-7v7nt Mar 30 13:24:35.587: INFO: Got endpoints: latency-svc-7v7nt [783.840239ms] Mar 30 13:24:35.605: INFO: Created: latency-svc-frpw8 Mar 30 13:24:35.617: INFO: Got endpoints: latency-svc-frpw8 [755.673646ms] Mar 30 13:24:35.637: INFO: Created: latency-svc-sxtw8 Mar 30 13:24:35.700: INFO: Got endpoints: latency-svc-sxtw8 [801.13513ms] Mar 30 13:24:35.708: 
INFO: Created: latency-svc-42b74 Mar 30 13:24:35.714: INFO: Got endpoints: latency-svc-42b74 [783.973928ms] Mar 30 13:24:35.731: INFO: Created: latency-svc-hhk7x Mar 30 13:24:35.744: INFO: Got endpoints: latency-svc-hhk7x [784.306362ms] Mar 30 13:24:35.761: INFO: Created: latency-svc-dzx4z Mar 30 13:24:35.774: INFO: Got endpoints: latency-svc-dzx4z [756.090497ms] Mar 30 13:24:35.791: INFO: Created: latency-svc-xfm8s Mar 30 13:24:35.820: INFO: Got endpoints: latency-svc-xfm8s [757.270145ms] Mar 30 13:24:35.835: INFO: Created: latency-svc-7htrl Mar 30 13:24:35.852: INFO: Got endpoints: latency-svc-7htrl [714.667137ms] Mar 30 13:24:35.889: INFO: Created: latency-svc-zg5zr Mar 30 13:24:35.912: INFO: Got endpoints: latency-svc-zg5zr [741.215952ms] Mar 30 13:24:35.958: INFO: Created: latency-svc-rxlv8 Mar 30 13:24:35.971: INFO: Got endpoints: latency-svc-rxlv8 [682.419179ms] Mar 30 13:24:36.001: INFO: Created: latency-svc-4mt2q Mar 30 13:24:36.014: INFO: Got endpoints: latency-svc-4mt2q [668.836484ms] Mar 30 13:24:36.032: INFO: Created: latency-svc-n247n Mar 30 13:24:36.045: INFO: Got endpoints: latency-svc-n247n [628.888274ms] Mar 30 13:24:36.087: INFO: Created: latency-svc-7xc64 Mar 30 13:24:36.135: INFO: Got endpoints: latency-svc-7xc64 [674.659454ms] Mar 30 13:24:36.175: INFO: Created: latency-svc-rd9s6 Mar 30 13:24:36.216: INFO: Got endpoints: latency-svc-rd9s6 [713.602448ms] Mar 30 13:24:36.229: INFO: Created: latency-svc-sbj2d Mar 30 13:24:36.244: INFO: Got endpoints: latency-svc-sbj2d [690.021744ms] Mar 30 13:24:36.274: INFO: Created: latency-svc-mkkr2 Mar 30 13:24:36.286: INFO: Got endpoints: latency-svc-mkkr2 [699.495536ms] Mar 30 13:24:36.309: INFO: Created: latency-svc-q2bwd Mar 30 13:24:36.358: INFO: Got endpoints: latency-svc-q2bwd [741.015635ms] Mar 30 13:24:36.360: INFO: Created: latency-svc-5lgxq Mar 30 13:24:36.370: INFO: Got endpoints: latency-svc-5lgxq [669.635792ms] Mar 30 13:24:36.397: INFO: Created: latency-svc-h4t8s Mar 30 13:24:36.433: INFO: Got endpoints: latency-svc-h4t8s [718.928862ms] Mar 30 13:24:36.491: INFO: Created: latency-svc-rtb4j Mar 30 13:24:36.497: INFO: Got endpoints: latency-svc-rtb4j [753.373028ms] Mar 30 13:24:36.525: INFO: Created: latency-svc-hnlwv Mar 30 13:24:36.533: INFO: Got endpoints: latency-svc-hnlwv [758.783668ms] Mar 30 13:24:36.561: INFO: Created: latency-svc-ctzlp Mar 30 13:24:36.576: INFO: Got endpoints: latency-svc-ctzlp [756.189507ms] Mar 30 13:24:36.629: INFO: Created: latency-svc-t5c56 Mar 30 13:24:36.635: INFO: Got endpoints: latency-svc-t5c56 [783.764542ms] Mar 30 13:24:36.655: INFO: Created: latency-svc-6kbhw Mar 30 13:24:36.672: INFO: Got endpoints: latency-svc-6kbhw [759.7988ms] Mar 30 13:24:36.693: INFO: Created: latency-svc-5jslg Mar 30 13:24:36.708: INFO: Got endpoints: latency-svc-5jslg [737.580559ms] Mar 30 13:24:36.778: INFO: Created: latency-svc-kcbnd Mar 30 13:24:36.793: INFO: Got endpoints: latency-svc-kcbnd [778.764117ms] Mar 30 13:24:36.794: INFO: Created: latency-svc-pdgw5 Mar 30 13:24:36.817: INFO: Got endpoints: latency-svc-pdgw5 [772.374082ms] Mar 30 13:24:36.835: INFO: Created: latency-svc-x6t5p Mar 30 13:24:36.847: INFO: Got endpoints: latency-svc-x6t5p [712.301878ms] Mar 30 13:24:36.878: INFO: Created: latency-svc-r4j66 Mar 30 13:24:36.915: INFO: Got endpoints: latency-svc-r4j66 [699.759527ms] Mar 30 13:24:36.932: INFO: Created: latency-svc-wt99n Mar 30 13:24:36.944: INFO: Got endpoints: latency-svc-wt99n [699.832115ms] Mar 30 13:24:36.964: INFO: Created: latency-svc-sv8lf Mar 30 13:24:36.974: INFO: Got 
endpoints: latency-svc-sv8lf [687.967919ms] Mar 30 13:24:36.998: INFO: Created: latency-svc-z92cd Mar 30 13:24:37.010: INFO: Got endpoints: latency-svc-z92cd [652.016155ms] Mar 30 13:24:37.072: INFO: Created: latency-svc-jlfz2 Mar 30 13:24:37.094: INFO: Got endpoints: latency-svc-jlfz2 [724.11179ms] Mar 30 13:24:37.125: INFO: Created: latency-svc-ntcdf Mar 30 13:24:37.137: INFO: Got endpoints: latency-svc-ntcdf [704.386826ms] Mar 30 13:24:37.161: INFO: Created: latency-svc-q55tf Mar 30 13:24:37.227: INFO: Got endpoints: latency-svc-q55tf [729.270133ms] Mar 30 13:24:37.229: INFO: Created: latency-svc-jpwc9 Mar 30 13:24:37.233: INFO: Got endpoints: latency-svc-jpwc9 [700.046396ms] Mar 30 13:24:37.305: INFO: Created: latency-svc-k26hp Mar 30 13:24:37.317: INFO: Got endpoints: latency-svc-k26hp [741.431576ms] Mar 30 13:24:37.363: INFO: Created: latency-svc-lrhzm Mar 30 13:24:37.378: INFO: Got endpoints: latency-svc-lrhzm [742.351128ms] Mar 30 13:24:37.399: INFO: Created: latency-svc-7c4lj Mar 30 13:24:37.414: INFO: Got endpoints: latency-svc-7c4lj [741.722755ms] Mar 30 13:24:37.442: INFO: Created: latency-svc-g4wmz Mar 30 13:24:37.526: INFO: Got endpoints: latency-svc-g4wmz [818.040147ms] Mar 30 13:24:37.550: INFO: Created: latency-svc-cn59n Mar 30 13:24:37.565: INFO: Got endpoints: latency-svc-cn59n [771.781074ms] Mar 30 13:24:37.585: INFO: Created: latency-svc-s5wqz Mar 30 13:24:37.601: INFO: Got endpoints: latency-svc-s5wqz [783.909787ms] Mar 30 13:24:37.621: INFO: Created: latency-svc-z5b7v Mar 30 13:24:37.682: INFO: Got endpoints: latency-svc-z5b7v [835.181584ms] Mar 30 13:24:37.684: INFO: Created: latency-svc-qcfgs Mar 30 13:24:37.691: INFO: Got endpoints: latency-svc-qcfgs [775.944097ms] Mar 30 13:24:37.712: INFO: Created: latency-svc-sxvxt Mar 30 13:24:37.728: INFO: Got endpoints: latency-svc-sxvxt [784.393131ms] Mar 30 13:24:37.827: INFO: Created: latency-svc-hmntd Mar 30 13:24:37.830: INFO: Got endpoints: latency-svc-hmntd [855.6358ms] Mar 30 13:24:37.857: INFO: Created: latency-svc-qr2hz Mar 30 13:24:37.871: INFO: Got endpoints: latency-svc-qr2hz [860.798528ms] Mar 30 13:24:37.899: INFO: Created: latency-svc-k7g89 Mar 30 13:24:37.914: INFO: Got endpoints: latency-svc-k7g89 [819.644283ms] Mar 30 13:24:37.982: INFO: Created: latency-svc-cs6j7 Mar 30 13:24:38.011: INFO: Created: latency-svc-jrx6f Mar 30 13:24:38.012: INFO: Got endpoints: latency-svc-cs6j7 [874.544989ms] Mar 30 13:24:38.029: INFO: Got endpoints: latency-svc-jrx6f [802.319059ms] Mar 30 13:24:38.047: INFO: Created: latency-svc-8tj76 Mar 30 13:24:38.071: INFO: Got endpoints: latency-svc-8tj76 [837.473833ms] Mar 30 13:24:38.138: INFO: Created: latency-svc-59g7v Mar 30 13:24:38.173: INFO: Got endpoints: latency-svc-59g7v [855.530832ms] Mar 30 13:24:38.203: INFO: Created: latency-svc-6jckl Mar 30 13:24:38.215: INFO: Got endpoints: latency-svc-6jckl [836.89618ms] Mar 30 13:24:38.234: INFO: Created: latency-svc-wx5c7 Mar 30 13:24:38.275: INFO: Got endpoints: latency-svc-wx5c7 [860.67263ms] Mar 30 13:24:38.325: INFO: Created: latency-svc-79wpk Mar 30 13:24:38.338: INFO: Got endpoints: latency-svc-79wpk [811.050714ms] Mar 30 13:24:38.354: INFO: Created: latency-svc-4kd6b Mar 30 13:24:38.401: INFO: Got endpoints: latency-svc-4kd6b [835.653797ms] Mar 30 13:24:38.419: INFO: Created: latency-svc-brd4m Mar 30 13:24:38.432: INFO: Got endpoints: latency-svc-brd4m [831.297222ms] Mar 30 13:24:38.450: INFO: Created: latency-svc-76mls Mar 30 13:24:38.463: INFO: Got endpoints: latency-svc-76mls [780.316877ms] Mar 30 13:24:38.480: INFO: 
Created: latency-svc-xz6hl Mar 30 13:24:38.493: INFO: Got endpoints: latency-svc-xz6hl [801.368342ms] Mar 30 13:24:38.533: INFO: Created: latency-svc-9sdfm Mar 30 13:24:38.535: INFO: Got endpoints: latency-svc-9sdfm [807.123254ms] Mar 30 13:24:38.581: INFO: Created: latency-svc-8jg2h Mar 30 13:24:38.629: INFO: Got endpoints: latency-svc-8jg2h [799.050397ms] Mar 30 13:24:38.682: INFO: Created: latency-svc-vsckh Mar 30 13:24:38.686: INFO: Got endpoints: latency-svc-vsckh [814.198033ms] Mar 30 13:24:38.708: INFO: Created: latency-svc-pxm4b Mar 30 13:24:38.734: INFO: Got endpoints: latency-svc-pxm4b [820.466981ms] Mar 30 13:24:38.757: INFO: Created: latency-svc-h89mh Mar 30 13:24:38.770: INFO: Got endpoints: latency-svc-h89mh [758.437539ms] Mar 30 13:24:38.821: INFO: Created: latency-svc-lmxtp Mar 30 13:24:38.825: INFO: Got endpoints: latency-svc-lmxtp [795.916329ms] Mar 30 13:24:38.845: INFO: Created: latency-svc-qw6p8 Mar 30 13:24:38.861: INFO: Got endpoints: latency-svc-qw6p8 [790.064394ms] Mar 30 13:24:38.881: INFO: Created: latency-svc-ld8g8 Mar 30 13:24:38.903: INFO: Got endpoints: latency-svc-ld8g8 [729.965862ms] Mar 30 13:24:38.958: INFO: Created: latency-svc-fjg59 Mar 30 13:24:38.966: INFO: Got endpoints: latency-svc-fjg59 [751.409236ms] Mar 30 13:24:38.985: INFO: Created: latency-svc-gfpll Mar 30 13:24:38.994: INFO: Got endpoints: latency-svc-gfpll [718.916579ms] Mar 30 13:24:39.019: INFO: Created: latency-svc-6hxmw Mar 30 13:24:39.036: INFO: Got endpoints: latency-svc-6hxmw [698.802968ms] Mar 30 13:24:39.114: INFO: Created: latency-svc-r2s2v Mar 30 13:24:39.116: INFO: Got endpoints: latency-svc-r2s2v [715.314133ms] Mar 30 13:24:39.140: INFO: Created: latency-svc-mck8g Mar 30 13:24:39.151: INFO: Got endpoints: latency-svc-mck8g [718.257906ms] Mar 30 13:24:39.177: INFO: Created: latency-svc-hm6rt Mar 30 13:24:39.187: INFO: Got endpoints: latency-svc-hm6rt [724.585602ms] Mar 30 13:24:39.240: INFO: Created: latency-svc-58hhv Mar 30 13:24:39.266: INFO: Got endpoints: latency-svc-58hhv [772.615392ms] Mar 30 13:24:39.268: INFO: Created: latency-svc-dlbtt Mar 30 13:24:39.296: INFO: Got endpoints: latency-svc-dlbtt [761.033461ms] Mar 30 13:24:39.339: INFO: Created: latency-svc-l6lqc Mar 30 13:24:39.376: INFO: Got endpoints: latency-svc-l6lqc [747.338993ms] Mar 30 13:24:39.387: INFO: Created: latency-svc-pxk8z Mar 30 13:24:39.398: INFO: Got endpoints: latency-svc-pxk8z [712.408455ms] Mar 30 13:24:39.421: INFO: Created: latency-svc-f29rd Mar 30 13:24:39.463: INFO: Got endpoints: latency-svc-f29rd [728.346961ms] Mar 30 13:24:39.521: INFO: Created: latency-svc-8lb5l Mar 30 13:24:39.525: INFO: Got endpoints: latency-svc-8lb5l [754.672611ms] Mar 30 13:24:39.549: INFO: Created: latency-svc-xvhxf Mar 30 13:24:39.562: INFO: Got endpoints: latency-svc-xvhxf [736.502108ms] Mar 30 13:24:39.585: INFO: Created: latency-svc-s4sz4 Mar 30 13:24:39.597: INFO: Got endpoints: latency-svc-s4sz4 [736.489189ms] Mar 30 13:24:39.620: INFO: Created: latency-svc-j8m7r Mar 30 13:24:39.682: INFO: Got endpoints: latency-svc-j8m7r [778.837748ms] Mar 30 13:24:39.685: INFO: Created: latency-svc-nt65l Mar 30 13:24:39.688: INFO: Got endpoints: latency-svc-nt65l [721.279879ms] Mar 30 13:24:39.710: INFO: Created: latency-svc-8cdjg Mar 30 13:24:39.724: INFO: Got endpoints: latency-svc-8cdjg [730.503591ms] Mar 30 13:24:39.746: INFO: Created: latency-svc-m69fd Mar 30 13:24:39.760: INFO: Got endpoints: latency-svc-m69fd [723.851536ms] Mar 30 13:24:39.814: INFO: Created: latency-svc-nzrrn Mar 30 13:24:39.817: INFO: Got endpoints: 
latency-svc-nzrrn [700.859389ms] Mar 30 13:24:39.841: INFO: Created: latency-svc-dm2qc Mar 30 13:24:39.857: INFO: Got endpoints: latency-svc-dm2qc [706.687281ms] Mar 30 13:24:39.885: INFO: Created: latency-svc-rzkhz Mar 30 13:24:39.951: INFO: Got endpoints: latency-svc-rzkhz [764.092027ms] Mar 30 13:24:39.969: INFO: Created: latency-svc-ssqdw Mar 30 13:24:39.990: INFO: Got endpoints: latency-svc-ssqdw [724.213158ms] Mar 30 13:24:40.023: INFO: Created: latency-svc-pjtl9 Mar 30 13:24:40.044: INFO: Got endpoints: latency-svc-pjtl9 [747.955877ms] Mar 30 13:24:40.107: INFO: Created: latency-svc-5fxk8 Mar 30 13:24:40.110: INFO: Got endpoints: latency-svc-5fxk8 [733.70359ms] Mar 30 13:24:40.110: INFO: Latencies: [48.163402ms 151.200325ms 204.647806ms 297.288013ms 388.023003ms 415.075571ms 461.075033ms 554.070109ms 589.634048ms 627.387714ms 628.888274ms 637.397417ms 645.494303ms 650.40589ms 651.16983ms 652.016155ms 652.113863ms 652.178154ms 664.331047ms 668.836484ms 669.635792ms 674.659454ms 675.004445ms 676.339373ms 678.440982ms 679.150175ms 681.561576ms 682.419179ms 682.716746ms 687.967919ms 690.021744ms 698.802968ms 699.369347ms 699.495536ms 699.533904ms 699.759527ms 699.832115ms 700.046396ms 700.859389ms 704.386826ms 706.687281ms 711.905097ms 712.301878ms 712.408455ms 713.602448ms 714.667137ms 715.314133ms 716.187228ms 718.257906ms 718.916579ms 718.928862ms 721.279879ms 721.826997ms 723.309257ms 723.851536ms 723.985741ms 724.11179ms 724.213158ms 724.585602ms 728.346961ms 729.270133ms 729.965862ms 730.503591ms 733.70359ms 735.169949ms 735.810201ms 735.877405ms 736.489189ms 736.502108ms 736.928078ms 737.580559ms 739.081517ms 741.015635ms 741.215952ms 741.244831ms 741.431576ms 741.722755ms 741.929091ms 742.351128ms 744.388273ms 745.485485ms 746.532362ms 747.338993ms 747.387162ms 747.955877ms 751.409236ms 752.75066ms 753.373028ms 753.836343ms 754.246585ms 754.382115ms 754.672611ms 755.673646ms 756.090497ms 756.189507ms 757.219944ms 757.270145ms 757.407625ms 757.952462ms 758.437539ms 758.783668ms 759.7988ms 760.23009ms 761.033461ms 762.286569ms 764.092027ms 765.638825ms 766.034551ms 766.113821ms 766.32629ms 769.087368ms 771.573651ms 771.70443ms 771.781074ms 772.135835ms 772.374082ms 772.615392ms 774.684556ms 775.944097ms 776.811401ms 777.124616ms 777.465419ms 777.957003ms 778.150758ms 778.348853ms 778.764117ms 778.837748ms 780.316877ms 781.128337ms 783.764542ms 783.840239ms 783.909787ms 783.973928ms 784.306362ms 784.350826ms 784.393131ms 784.9874ms 785.929532ms 786.438937ms 790.064394ms 791.979582ms 795.916329ms 799.050397ms 799.408116ms 800.704818ms 801.13513ms 801.368342ms 802.319059ms 803.160419ms 807.123254ms 807.909779ms 807.998863ms 810.809563ms 811.050714ms 813.935975ms 814.198033ms 814.700896ms 815.663582ms 818.040147ms 819.644283ms 820.232599ms 820.466981ms 826.146296ms 831.297222ms 833.035013ms 834.860562ms 835.181584ms 835.653797ms 836.89618ms 837.473833ms 838.054627ms 838.056882ms 838.076598ms 839.807268ms 840.036128ms 840.048392ms 842.925532ms 843.997281ms 850.229394ms 852.290999ms 855.305986ms 855.530832ms 855.6358ms 856.351224ms 860.216839ms 860.67263ms 860.798528ms 862.082276ms 862.181678ms 865.201725ms 868.579626ms 872.12735ms 872.926193ms 874.544989ms 874.64777ms 883.485238ms 891.556345ms 897.348657ms 908.931621ms 926.659558ms] Mar 30 13:24:40.110: INFO: 50 %ile: 758.783668ms Mar 30 13:24:40.110: INFO: 90 %ile: 855.305986ms Mar 30 13:24:40.110: INFO: 99 %ile: 908.931621ms Mar 30 13:24:40.110: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:24:40.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1012" for this suite. Mar 30 13:25:02.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:25:02.192: INFO: namespace svc-latency-1012 deletion completed in 22.07534495s • [SLOW TEST:35.663 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:25:02.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-a746b3d9-7da3-4809-b3e5-109ecefe740e in namespace container-probe-4289 Mar 30 13:25:06.278: INFO: Started pod test-webserver-a746b3d9-7da3-4809-b3e5-109ecefe740e in namespace container-probe-4289 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 13:25:06.281: INFO: Initial restart count of pod test-webserver-a746b3d9-7da3-4809-b3e5-109ecefe740e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:29:06.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4289" for this suite. 
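The probe test above asserts the absence of a restart: as long as the HTTP liveness endpoint keeps answering with a 2xx status, restartCount stays at 0 for the whole observation window. A runnable sketch of the same mechanism, assuming a stock nginx image probed on / (the suite's own test webserver answers on /healthz instead):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo          # illustrative name
spec:
  containers:
  - name: test-webserver
    image: nginx                    # assumed image; the suite uses its own test webserver
    livenessProbe:
      httpGet:
        path: /                     # the suite probes /healthz on its test image
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3           # three consecutive failures before a restart

After a few minutes, kubectl get pod liveness-http-demo should still show RESTARTS 0, mirroring the roughly four-minute wait between the initial restart-count check and the pod deletion in the log above.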
Mar 30 13:29:13.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:29:13.106: INFO: namespace container-probe-4289 deletion completed in 6.09587981s • [SLOW TEST:250.914 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:29:13.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 30 13:29:13.214: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:13.224: INFO: Number of nodes with available pods: 0 Mar 30 13:29:13.225: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:29:14.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:14.234: INFO: Number of nodes with available pods: 0 Mar 30 13:29:14.234: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:29:15.334: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:15.338: INFO: Number of nodes with available pods: 0 Mar 30 13:29:15.338: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:29:16.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:16.234: INFO: Number of nodes with available pods: 0 Mar 30 13:29:16.234: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:29:17.252: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:17.255: INFO: Number of nodes with available pods: 2 Mar 30 13:29:17.255: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 30 13:29:17.328: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:29:17.333: INFO: Number of nodes with available pods: 2 Mar 30 13:29:17.333: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6835, will wait for the garbage collector to delete the pods Mar 30 13:29:18.485: INFO: Deleting DaemonSet.extensions daemon-set took: 5.421318ms Mar 30 13:29:18.785: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248937ms Mar 30 13:29:22.289: INFO: Number of nodes with available pods: 0 Mar 30 13:29:22.289: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 13:29:22.292: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6835/daemonsets","resourceVersion":"2678729"},"items":null} Mar 30 13:29:22.295: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6835/pods","resourceVersion":"2678729"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:29:22.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6835" for this suite. Mar 30 13:29:28.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:29:28.402: INFO: namespace daemonsets-6835 deletion completed in 6.094483157s • [SLOW TEST:15.297 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:29:28.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-4845119f-576b-4762-a976-406e5d2701f9 in namespace container-probe-376 Mar 30 13:29:32.466: INFO: Started pod busybox-4845119f-576b-4762-a976-406e5d2701f9 in namespace container-probe-376 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 13:29:32.468: 
INFO: Initial restart count of pod busybox-4845119f-576b-4762-a976-406e5d2701f9 is 0 Mar 30 13:30:20.572: INFO: Restart count of pod container-probe-376/busybox-4845119f-576b-4762-a976-406e5d2701f9 is now 1 (48.1032172s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:30:20.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-376" for this suite. Mar 30 13:30:26.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:30:26.704: INFO: namespace container-probe-376 deletion completed in 6.091887767s • [SLOW TEST:58.301 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:30:26.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4ab544c8-3379-4398-9aa9-4d8558e6d9b4 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:30:30.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3075" for this suite. 
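The binary-data ConfigMap test above mounts a ConfigMap that carries both a text key and a bytes key, then waits for each to surface as a file in the volume. A sketch of the two objects involved, again against the v1.15-era API; key names, contents, and the mount path are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},              // text key
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}}, // arbitrary bytes
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "ls /etc/configmap-volume && sleep 600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume", // both keys appear as files here
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}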
Mar 30 13:30:52.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:30:52.932: INFO: namespace configmap-3075 deletion completed in 22.08660862s • [SLOW TEST:26.228 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:30:52.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6531 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 30 13:30:53.075: INFO: Found 0 stateful pods, waiting for 3 Mar 30 13:31:03.080: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:31:03.080: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:31:03.080: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 30 13:31:13.079: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:31:13.079: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:31:13.079: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 30 13:31:13.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6531 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 13:31:15.627: INFO: stderr: "I0330 13:31:15.529715 1230 log.go:172] (0xc000116630) (0xc00050a820) Create stream\nI0330 13:31:15.529747 1230 log.go:172] (0xc000116630) (0xc00050a820) Stream added, broadcasting: 1\nI0330 13:31:15.531871 1230 log.go:172] (0xc000116630) Reply frame received for 1\nI0330 13:31:15.531941 1230 log.go:172] (0xc000116630) (0xc00020e000) Create stream\nI0330 13:31:15.531982 1230 log.go:172] (0xc000116630) (0xc00020e000) Stream added, broadcasting: 3\nI0330 13:31:15.533063 1230 log.go:172] (0xc000116630) Reply frame received for 3\nI0330 13:31:15.533268 1230 log.go:172] (0xc000116630) (0xc000286000) Create stream\nI0330 13:31:15.533297 1230 log.go:172] (0xc000116630) (0xc000286000) Stream added, broadcasting: 
5\nI0330 13:31:15.534462 1230 log.go:172] (0xc000116630) Reply frame received for 5\nI0330 13:31:15.595928 1230 log.go:172] (0xc000116630) Data frame received for 5\nI0330 13:31:15.595953 1230 log.go:172] (0xc000286000) (5) Data frame handling\nI0330 13:31:15.595973 1230 log.go:172] (0xc000286000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 13:31:15.620408 1230 log.go:172] (0xc000116630) Data frame received for 5\nI0330 13:31:15.620437 1230 log.go:172] (0xc000286000) (5) Data frame handling\nI0330 13:31:15.620460 1230 log.go:172] (0xc000116630) Data frame received for 3\nI0330 13:31:15.620472 1230 log.go:172] (0xc00020e000) (3) Data frame handling\nI0330 13:31:15.620484 1230 log.go:172] (0xc00020e000) (3) Data frame sent\nI0330 13:31:15.620503 1230 log.go:172] (0xc000116630) Data frame received for 3\nI0330 13:31:15.620513 1230 log.go:172] (0xc00020e000) (3) Data frame handling\nI0330 13:31:15.621970 1230 log.go:172] (0xc000116630) Data frame received for 1\nI0330 13:31:15.621999 1230 log.go:172] (0xc00050a820) (1) Data frame handling\nI0330 13:31:15.622018 1230 log.go:172] (0xc00050a820) (1) Data frame sent\nI0330 13:31:15.622033 1230 log.go:172] (0xc000116630) (0xc00050a820) Stream removed, broadcasting: 1\nI0330 13:31:15.622054 1230 log.go:172] (0xc000116630) Go away received\nI0330 13:31:15.622313 1230 log.go:172] (0xc000116630) (0xc00050a820) Stream removed, broadcasting: 1\nI0330 13:31:15.622328 1230 log.go:172] (0xc000116630) (0xc00020e000) Stream removed, broadcasting: 3\nI0330 13:31:15.622335 1230 log.go:172] (0xc000116630) (0xc000286000) Stream removed, broadcasting: 5\n" Mar 30 13:31:15.627: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 13:31:15.627: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 30 13:31:25.658: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 30 13:31:35.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6531 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 13:31:35.908: INFO: stderr: "I0330 13:31:35.821488 1259 log.go:172] (0xc0009be580) (0xc0005dad20) Create stream\nI0330 13:31:35.821546 1259 log.go:172] (0xc0009be580) (0xc0005dad20) Stream added, broadcasting: 1\nI0330 13:31:35.823900 1259 log.go:172] (0xc0009be580) Reply frame received for 1\nI0330 13:31:35.823956 1259 log.go:172] (0xc0009be580) (0xc000a0c000) Create stream\nI0330 13:31:35.823973 1259 log.go:172] (0xc0009be580) (0xc000a0c000) Stream added, broadcasting: 3\nI0330 13:31:35.825357 1259 log.go:172] (0xc0009be580) Reply frame received for 3\nI0330 13:31:35.825516 1259 log.go:172] (0xc0009be580) (0xc000aa4000) Create stream\nI0330 13:31:35.825563 1259 log.go:172] (0xc0009be580) (0xc000aa4000) Stream added, broadcasting: 5\nI0330 13:31:35.827240 1259 log.go:172] (0xc0009be580) Reply frame received for 5\nI0330 13:31:35.901727 1259 log.go:172] (0xc0009be580) Data frame received for 3\nI0330 13:31:35.901765 1259 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0330 13:31:35.901780 1259 log.go:172] (0xc000a0c000) (3) Data frame sent\nI0330 13:31:35.901818 1259 log.go:172] (0xc0009be580) Data frame received for 5\nI0330 13:31:35.901840 1259 log.go:172] (0xc000aa4000) 
(5) Data frame handling\nI0330 13:31:35.901863 1259 log.go:172] (0xc000aa4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 13:31:35.901894 1259 log.go:172] (0xc0009be580) Data frame received for 5\nI0330 13:31:35.901924 1259 log.go:172] (0xc0009be580) Data frame received for 3\nI0330 13:31:35.901960 1259 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0330 13:31:35.901988 1259 log.go:172] (0xc000aa4000) (5) Data frame handling\nI0330 13:31:35.903740 1259 log.go:172] (0xc0009be580) Data frame received for 1\nI0330 13:31:35.903764 1259 log.go:172] (0xc0005dad20) (1) Data frame handling\nI0330 13:31:35.903791 1259 log.go:172] (0xc0005dad20) (1) Data frame sent\nI0330 13:31:35.903812 1259 log.go:172] (0xc0009be580) (0xc0005dad20) Stream removed, broadcasting: 1\nI0330 13:31:35.903850 1259 log.go:172] (0xc0009be580) Go away received\nI0330 13:31:35.904300 1259 log.go:172] (0xc0009be580) (0xc0005dad20) Stream removed, broadcasting: 1\nI0330 13:31:35.904326 1259 log.go:172] (0xc0009be580) (0xc000a0c000) Stream removed, broadcasting: 3\nI0330 13:31:35.904341 1259 log.go:172] (0xc0009be580) (0xc000aa4000) Stream removed, broadcasting: 5\n" Mar 30 13:31:35.909: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 13:31:35.909: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 13:31:45.931: INFO: Waiting for StatefulSet statefulset-6531/ss2 to complete update Mar 30 13:31:45.931: INFO: Waiting for Pod statefulset-6531/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 30 13:31:45.931: INFO: Waiting for Pod statefulset-6531/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 30 13:31:55.938: INFO: Waiting for StatefulSet statefulset-6531/ss2 to complete update Mar 30 13:31:55.938: INFO: Waiting for Pod statefulset-6531/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 30 13:32:05.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6531 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 13:32:06.183: INFO: stderr: "I0330 13:32:06.061231 1280 log.go:172] (0xc0009ae630) (0xc000502b40) Create stream\nI0330 13:32:06.061291 1280 log.go:172] (0xc0009ae630) (0xc000502b40) Stream added, broadcasting: 1\nI0330 13:32:06.063935 1280 log.go:172] (0xc0009ae630) Reply frame received for 1\nI0330 13:32:06.064083 1280 log.go:172] (0xc0009ae630) (0xc000a0e000) Create stream\nI0330 13:32:06.064149 1280 log.go:172] (0xc0009ae630) (0xc000a0e000) Stream added, broadcasting: 3\nI0330 13:32:06.065633 1280 log.go:172] (0xc0009ae630) Reply frame received for 3\nI0330 13:32:06.065661 1280 log.go:172] (0xc0009ae630) (0xc000502280) Create stream\nI0330 13:32:06.065669 1280 log.go:172] (0xc0009ae630) (0xc000502280) Stream added, broadcasting: 5\nI0330 13:32:06.066420 1280 log.go:172] (0xc0009ae630) Reply frame received for 5\nI0330 13:32:06.146752 1280 log.go:172] (0xc0009ae630) Data frame received for 5\nI0330 13:32:06.146785 1280 log.go:172] (0xc000502280) (5) Data frame handling\nI0330 13:32:06.146804 1280 log.go:172] (0xc000502280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 13:32:06.176570 1280 log.go:172] (0xc0009ae630) Data frame received for 3\nI0330 13:32:06.176609 1280 log.go:172] (0xc000a0e000) (3) Data frame handling\nI0330 13:32:06.176623 1280 
log.go:172] (0xc000a0e000) (3) Data frame sent\nI0330 13:32:06.176639 1280 log.go:172] (0xc0009ae630) Data frame received for 3\nI0330 13:32:06.176650 1280 log.go:172] (0xc000a0e000) (3) Data frame handling\nI0330 13:32:06.176826 1280 log.go:172] (0xc0009ae630) Data frame received for 5\nI0330 13:32:06.176848 1280 log.go:172] (0xc000502280) (5) Data frame handling\nI0330 13:32:06.178536 1280 log.go:172] (0xc0009ae630) Data frame received for 1\nI0330 13:32:06.178566 1280 log.go:172] (0xc000502b40) (1) Data frame handling\nI0330 13:32:06.178579 1280 log.go:172] (0xc000502b40) (1) Data frame sent\nI0330 13:32:06.178594 1280 log.go:172] (0xc0009ae630) (0xc000502b40) Stream removed, broadcasting: 1\nI0330 13:32:06.178613 1280 log.go:172] (0xc0009ae630) Go away received\nI0330 13:32:06.179013 1280 log.go:172] (0xc0009ae630) (0xc000502b40) Stream removed, broadcasting: 1\nI0330 13:32:06.179041 1280 log.go:172] (0xc0009ae630) (0xc000a0e000) Stream removed, broadcasting: 3\nI0330 13:32:06.179060 1280 log.go:172] (0xc0009ae630) (0xc000502280) Stream removed, broadcasting: 5\n" Mar 30 13:32:06.183: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 13:32:06.183: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 13:32:16.214: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 30 13:32:26.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6531 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 13:32:26.475: INFO: stderr: "I0330 13:32:26.391621 1300 log.go:172] (0xc000882420) (0xc0008fc6e0) Create stream\nI0330 13:32:26.391670 1300 log.go:172] (0xc000882420) (0xc0008fc6e0) Stream added, broadcasting: 1\nI0330 13:32:26.393768 1300 log.go:172] (0xc000882420) Reply frame received for 1\nI0330 13:32:26.393817 1300 log.go:172] (0xc000882420) (0xc0003aa3c0) Create stream\nI0330 13:32:26.393827 1300 log.go:172] (0xc000882420) (0xc0003aa3c0) Stream added, broadcasting: 3\nI0330 13:32:26.394760 1300 log.go:172] (0xc000882420) Reply frame received for 3\nI0330 13:32:26.394804 1300 log.go:172] (0xc000882420) (0xc000888000) Create stream\nI0330 13:32:26.394821 1300 log.go:172] (0xc000882420) (0xc000888000) Stream added, broadcasting: 5\nI0330 13:32:26.395618 1300 log.go:172] (0xc000882420) Reply frame received for 5\nI0330 13:32:26.469371 1300 log.go:172] (0xc000882420) Data frame received for 3\nI0330 13:32:26.469421 1300 log.go:172] (0xc000882420) Data frame received for 5\nI0330 13:32:26.469457 1300 log.go:172] (0xc000888000) (5) Data frame handling\nI0330 13:32:26.469473 1300 log.go:172] (0xc000888000) (5) Data frame sent\nI0330 13:32:26.469497 1300 log.go:172] (0xc000882420) Data frame received for 5\nI0330 13:32:26.469504 1300 log.go:172] (0xc000888000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 13:32:26.469525 1300 log.go:172] (0xc0003aa3c0) (3) Data frame handling\nI0330 13:32:26.469534 1300 log.go:172] (0xc0003aa3c0) (3) Data frame sent\nI0330 13:32:26.469538 1300 log.go:172] (0xc000882420) Data frame received for 3\nI0330 13:32:26.469542 1300 log.go:172] (0xc0003aa3c0) (3) Data frame handling\nI0330 13:32:26.470777 1300 log.go:172] (0xc000882420) Data frame received for 1\nI0330 13:32:26.470791 1300 log.go:172] (0xc0008fc6e0) (1) Data frame handling\nI0330 13:32:26.470802 1300 log.go:172] (0xc0008fc6e0) (1) Data frame sent\nI0330 
13:32:26.470813 1300 log.go:172] (0xc000882420) (0xc0008fc6e0) Stream removed, broadcasting: 1\nI0330 13:32:26.470996 1300 log.go:172] (0xc000882420) Go away received\nI0330 13:32:26.471049 1300 log.go:172] (0xc000882420) (0xc0008fc6e0) Stream removed, broadcasting: 1\nI0330 13:32:26.471163 1300 log.go:172] (0xc000882420) (0xc0003aa3c0) Stream removed, broadcasting: 3\nI0330 13:32:26.471180 1300 log.go:172] (0xc000882420) (0xc000888000) Stream removed, broadcasting: 5\n" Mar 30 13:32:26.475: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 13:32:26.475: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 13:32:46.502: INFO: Waiting for StatefulSet statefulset-6531/ss2 to complete update Mar 30 13:32:46.502: INFO: Waiting for Pod statefulset-6531/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 30 13:32:56.511: INFO: Deleting all statefulset in ns statefulset-6531 Mar 30 13:32:56.512: INFO: Scaling statefulset ss2 to 0 Mar 30 13:33:16.528: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 13:33:16.531: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:33:16.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6531" for this suite. Mar 30 13:33:22.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:33:22.638: INFO: namespace statefulset-6531 deletion completed in 6.088524867s • [SLOW TEST:149.704 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:33:22.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:33:22.716: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.920236ms) Mar 30 13:33:22.720: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.744537ms) Mar 30 13:33:22.724: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.257038ms) Mar 30 13:33:22.728: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.21937ms) Mar 30 13:33:22.731: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.928538ms) Mar 30 13:33:22.734: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.908137ms) Mar 30 13:33:22.737: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.273178ms) Mar 30 13:33:22.740: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.089659ms) Mar 30 13:33:22.743: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.081432ms) Mar 30 13:33:22.746: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.2817ms) Mar 30 13:33:22.750: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.307723ms) Mar 30 13:33:22.753: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.667238ms) Mar 30 13:33:22.757: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.809006ms) Mar 30 13:33:22.761: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.77681ms) Mar 30 13:33:22.764: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.97688ms) Mar 30 13:33:22.768: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.519802ms) Mar 30 13:33:22.771: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.193392ms) Mar 30 13:33:22.792: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 20.756313ms) Mar 30 13:33:22.796: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.284345ms) Mar 30 13:33:22.800: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.801954ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:33:22.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5802" for this suite. Mar 30 13:33:28.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:33:28.896: INFO: namespace proxy-5802 deletion completed in 6.093018734s • [SLOW TEST:6.258 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:33:28.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 30 13:33:28.969: INFO: Waiting up to 5m0s for pod "pod-1ca80c3e-d0c1-4772-8683-13e991403aae" in namespace "emptydir-4560" to be "success or failure" Mar 30 13:33:28.973: INFO: Pod "pod-1ca80c3e-d0c1-4772-8683-13e991403aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227771ms Mar 30 13:33:30.983: INFO: Pod "pod-1ca80c3e-d0c1-4772-8683-13e991403aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014642984s Mar 30 13:33:32.987: INFO: Pod "pod-1ca80c3e-d0c1-4772-8683-13e991403aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018656066s STEP: Saw pod success Mar 30 13:33:32.987: INFO: Pod "pod-1ca80c3e-d0c1-4772-8683-13e991403aae" satisfied condition "success or failure" Mar 30 13:33:32.990: INFO: Trying to get logs from node iruya-worker pod pod-1ca80c3e-d0c1-4772-8683-13e991403aae container test-container: STEP: delete the pod Mar 30 13:33:33.005: INFO: Waiting for pod pod-1ca80c3e-d0c1-4772-8683-13e991403aae to disappear Mar 30 13:33:33.009: INFO: Pod pod-1ca80c3e-d0c1-4772-8683-13e991403aae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:33:33.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4560" for this suite. 
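The emptyDir test that just completed writes a 0644 file as root into an emptyDir backed by the node's default medium and asserts on the mode and content the container observes. A sketch of the volume wiring, with illustrative paths and busybox standing in for the suite's test image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium means the node's default storage; the
				// tmpfs variants of this test set
				// Medium: corev1.StorageMediumMemory instead.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}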
Mar 30 13:33:39.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:33:39.112: INFO: namespace emptydir-4560 deletion completed in 6.100090307s • [SLOW TEST:10.215 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:33:39.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:33:39.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4837" for this suite. 
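The kubelet test above checks lifecycle plumbing only: a pod whose container exits non-zero on every start must still be deletable while it crash-loops. A sketch of such a pod (name and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so the kubelet keeps
			// restarting the container with back-off; deleting the pod must
			// still succeed in that state.
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 on every start
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}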
Mar 30 13:33:45.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:33:45.317: INFO: namespace kubelet-test-4837 deletion completed in 6.085452308s • [SLOW TEST:6.205 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:33:45.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7402, will wait for the garbage collector to delete the pods Mar 30 13:33:51.464: INFO: Deleting Job.batch foo took: 6.44469ms Mar 30 13:33:51.765: INFO: Terminating Job.batch foo pods took: 300.239592ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:34:32.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7402" for this suite. 
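In the Job test above, "Ensuring active pods == parallelism" verifies that the controller keeps exactly spec.parallelism pods running, and the delete step uses foreground propagation, which is why the log waits for the garbage collector to remove the pods before the Job object itself goes away. A sketch of a comparable Job, with illustrative counts and command:

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	parallelism, completions := int32(2), int32(4)
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "foo"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism, // steady-state count of active pods
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// Job pods must use Never or OnFailure, never Always.
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	// Deleting this Job with metav1.DeletePropagationForeground reproduces
	// the "will wait for the garbage collector" behaviour seen in the log.
	b, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(b))
}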
Mar 30 13:34:38.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:34:38.365: INFO: namespace job-7402 deletion completed in 6.092091149s • [SLOW TEST:53.048 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:34:38.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:34:42.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2345" for this suite. Mar 30 13:35:22.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:35:22.586: INFO: namespace kubelet-test-2345 deletion completed in 40.112735324s • [SLOW TEST:44.221 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:35:22.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-7851d754-12c2-4a25-90b9-7f0542fd606e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7851d754-12c2-4a25-90b9-7f0542fd606e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:36:55.097: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3916" for this suite. Mar 30 13:37:17.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:37:17.197: INFO: namespace configmap-3916 deletion completed in 22.095698479s • [SLOW TEST:114.611 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:37:17.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-9f1bb990-04da-4864-a7f3-84c3e7c976ef in namespace container-probe-4482 Mar 30 13:37:21.306: INFO: Started pod busybox-9f1bb990-04da-4864-a7f3-84c3e7c976ef in namespace container-probe-4482 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 13:37:21.309: INFO: Initial restart count of pod busybox-9f1bb990-04da-4864-a7f3-84c3e7c976ef is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:41:21.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4482" for this suite. 
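The exec-probe test that just finished is the mirror image of the restarting variant earlier in this log: here the container writes /tmp/health once and never removes it, so "cat /tmp/health" keeps succeeding and restartCount stays 0 for the whole four-minute watch. A sketch of that shape, with illustrative timings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// /tmp/health is written once and never removed, so the
				// probe keeps succeeding; the restarting variant earlier in
				// the log deletes the file after a few seconds instead.
				Command: []string{"/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Handler in v1.15-era k8s.io/api; ProbeHandler in newer releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}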
Mar 30 13:41:27.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:41:27.971: INFO: namespace container-probe-4482 deletion completed in 6.107432331s • [SLOW TEST:250.772 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:41:27.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 30 13:41:28.051: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9598" to be "success or failure" Mar 30 13:41:28.058: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.937136ms Mar 30 13:41:30.062: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010565677s Mar 30 13:41:32.066: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014827354s Mar 30 13:41:34.071: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019545274s STEP: Saw pod success Mar 30 13:41:34.071: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 30 13:41:34.075: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 30 13:41:34.095: INFO: Waiting for pod pod-host-path-test to disappear Mar 30 13:41:34.100: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:41:34.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9598" for this suite. 
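The hostPath test above mounts a directory from the node's own filesystem and asserts on the mode the container observes at the mount point. A sketch of the volume wiring, assuming the v1.15-era API; the node path and the stat command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/test-volume", // node-local path; illustrative
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the octal mode the test asserts on.
				Command: []string{"/bin/sh", "-c", "stat -c %a /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}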
Mar 30 13:41:40.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:41:40.193: INFO: namespace hostpath-9598 deletion completed in 6.090374155s • [SLOW TEST:12.222 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:41:40.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-m7vm STEP: Creating a pod to test atomic-volume-subpath Mar 30 13:41:40.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m7vm" in namespace "subpath-1459" to be "success or failure" Mar 30 13:41:40.292: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.478057ms Mar 30 13:41:42.296: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00755021s Mar 30 13:41:44.300: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 4.011761204s Mar 30 13:41:46.305: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 6.016046487s Mar 30 13:41:48.308: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 8.019437848s Mar 30 13:41:50.311: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 10.022977618s Mar 30 13:41:52.321: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 12.032621871s Mar 30 13:41:54.325: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 14.036974558s Mar 30 13:41:56.330: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 16.041073965s Mar 30 13:41:58.335: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 18.046109748s Mar 30 13:42:00.338: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 20.049815886s Mar 30 13:42:02.344: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Running", Reason="", readiness=true. Elapsed: 22.055466393s Mar 30 13:42:04.348: INFO: Pod "pod-subpath-test-downwardapi-m7vm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.059709838s STEP: Saw pod success Mar 30 13:42:04.348: INFO: Pod "pod-subpath-test-downwardapi-m7vm" satisfied condition "success or failure" Mar 30 13:42:04.351: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-m7vm container test-container-subpath-downwardapi-m7vm: STEP: delete the pod Mar 30 13:42:04.373: INFO: Waiting for pod pod-subpath-test-downwardapi-m7vm to disappear Mar 30 13:42:04.417: INFO: Pod pod-subpath-test-downwardapi-m7vm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-m7vm Mar 30 13:42:04.417: INFO: Deleting pod "pod-subpath-test-downwardapi-m7vm" in namespace "subpath-1459" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:42:04.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1459" for this suite. Mar 30 13:42:10.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:42:10.510: INFO: namespace subpath-1459 deletion completed in 6.086433939s • [SLOW TEST:30.316 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:42:10.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:42:36.596: INFO: Container started at 2020-03-30 13:42:12 +0000 UTC, pod became ready at 2020-03-30 13:42:35 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:42:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7551" for this suite. 
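In the readiness test above, the container started at 13:42:12 but the pod flipped Ready only at 13:42:35: the kubelet never runs a readiness probe before initialDelaySeconds has elapsed, and an unready period is a gate, not a restart, so restartCount stays 0. A sketch of a probe with that behaviour; the delay value and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-initial-delay"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				ReadinessProbe: &corev1.Probe{
					// Handler in v1.15-era k8s.io/api; ProbeHandler in newer releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/true"}},
					},
					// The probe always succeeds, but the kubelet will not run
					// it before this delay, so Ready turns true only ~20s
					// (plus one probe period) after the container starts --
					// the gap measured in the log above.
					InitialDelaySeconds: 20,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}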
Mar 30 13:42:58.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:42:58.699: INFO: namespace container-probe-7551 deletion completed in 22.099360841s • [SLOW TEST:48.189 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:42:58.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6183.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6183.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.111.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.111.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.111.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.111.59_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6183.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6183.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6183.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6183.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 59.111.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.111.59_udp@PTR;check="$$(dig +tcp +noall +answer +search 59.111.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.111.59_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 13:43:04.844: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.851: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.854: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.877: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.882: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.885: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:04.902: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 13:43:09.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods 
dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.914: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.920: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.940: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.944: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.946: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:09.960: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 13:43:14.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.913: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.916: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.932: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the 
server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.936: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:14.954: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 13:43:19.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.916: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.942: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.946: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod 
dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:19.967: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 13:43:24.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.914: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.935: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.937: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.940: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.943: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:24.961: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 
13:43:29.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.942: INFO: Unable to read jessie_udp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.945: INFO: Unable to read jessie_tcp@dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local from pod dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732: the server could not find the requested resource (get pods dns-test-10d5115e-148d-4779-a904-3d4da5014732) Mar 30 13:43:29.968: INFO: Lookups using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 failed for: [wheezy_udp@dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@dns-test-service.dns-6183.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_udp@dns-test-service.dns-6183.svc.cluster.local jessie_tcp@dns-test-service.dns-6183.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6183.svc.cluster.local] Mar 30 13:43:34.960: INFO: DNS probes using dns-6183/dns-test-10d5115e-148d-4779-a904-3d4da5014732 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:43:35.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6183" for this suite. 
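
[Editor's sketch] The repeated "Unable to read wheezy_udp@... from pod" lines above are the prober polling for per-record result files: the probe pod runs a lookup loop for each expected name and writes a marker file only after a successful answer, so reads fail until the service records resolve, and the spec converges at 13:43:34 with "DNS probes ... succeeded". The doubled $$ in the logged one-liner appears to be escaping added by the harness; a plain $ is meant. A minimal hand-run version of the same check, assuming an image that ships dig (the dnsutils e2e image is one option) and reusing the service name from the log:

kubectl -n dns-6183 run dns-probe --restart=Never --command \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -- sh -c '
check="$(dig +notcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" \
  && test -n "$check" && echo udp-OK
check="$(dig +tcp +noall +answer +search dns-test-service.dns-6183.svc.cluster.local A)" \
  && test -n "$check" && echo tcp-OK'
kubectl -n dns-6183 logs dns-probe
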
Mar 30 13:43:41.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:43:41.691: INFO: namespace dns-6183 deletion completed in 6.118695641s • [SLOW TEST:42.992 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:43:41.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:43:45.872: INFO: Waiting up to 5m0s for pod "client-envvars-91f022da-3950-4f7e-a002-cd25763c8440" in namespace "pods-7111" to be "success or failure" Mar 30 13:43:45.874: INFO: Pod "client-envvars-91f022da-3950-4f7e-a002-cd25763c8440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645736ms Mar 30 13:43:47.884: INFO: Pod "client-envvars-91f022da-3950-4f7e-a002-cd25763c8440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012168452s Mar 30 13:43:49.888: INFO: Pod "client-envvars-91f022da-3950-4f7e-a002-cd25763c8440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015989281s STEP: Saw pod success Mar 30 13:43:49.888: INFO: Pod "client-envvars-91f022da-3950-4f7e-a002-cd25763c8440" satisfied condition "success or failure" Mar 30 13:43:49.891: INFO: Trying to get logs from node iruya-worker pod client-envvars-91f022da-3950-4f7e-a002-cd25763c8440 container env3cont: STEP: delete the pod Mar 30 13:43:49.922: INFO: Waiting for pod client-envvars-91f022da-3950-4f7e-a002-cd25763c8440 to disappear Mar 30 13:43:49.949: INFO: Pod client-envvars-91f022da-3950-4f7e-a002-cd25763c8440 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:43:49.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7111" for this suite. 
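
[Editor's sketch] What this spec asserts: for every Service that exists in the pod's namespace when the pod starts, the kubelet injects docker-links-style environment variables (<NAME>_SERVICE_HOST, <NAME>_SERVICE_PORT, <NAME>_PORT_*; the service name is upper-cased with dashes turned into underscores). Ordering matters, which is why the test creates its service first and the client pod a few seconds later. A hand-run sketch with illustrative names:

kubectl create service clusterip fooservice --tcp=8765:8080
kubectl run env-dump --restart=Never --image=busybox --command -- sh -c 'env | grep ^FOOSERVICE_'
# once the pod has completed:
kubectl logs env-dump
# expect FOOSERVICE_SERVICE_HOST=<cluster IP>, FOOSERVICE_SERVICE_PORT=8765, FOOSERVICE_PORT_8765_TCP=tcp://... and friends
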
Mar 30 13:44:35.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:44:36.048: INFO: namespace pods-7111 deletion completed in 46.095684161s • [SLOW TEST:54.356 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:44:36.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-77fddbd7-2580-47ff-bb36-aed7d0f62f58 STEP: Creating a pod to test consume configMaps Mar 30 13:44:36.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284" in namespace "configmap-6530" to be "success or failure" Mar 30 13:44:36.155: INFO: Pod "pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284": Phase="Pending", Reason="", readiness=false. Elapsed: 24.02489ms Mar 30 13:44:38.208: INFO: Pod "pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076653471s Mar 30 13:44:40.213: INFO: Pod "pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081304311s STEP: Saw pod success Mar 30 13:44:40.213: INFO: Pod "pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284" satisfied condition "success or failure" Mar 30 13:44:40.215: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284 container configmap-volume-test: STEP: delete the pod Mar 30 13:44:40.248: INFO: Waiting for pod pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284 to disappear Mar 30 13:44:40.262: INFO: Pod pod-configmaps-dbba609d-f466-449a-9be0-8835ea43f284 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:44:40.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6530" for this suite. 
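
[Editor's sketch] The non-root variant mounts a ConfigMap volume into a pod whose security context sets a non-root UID and verifies the projected file is still readable. A minimal sketch (UID and names illustrative; busybox stands in for the suite's own test image):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000            # any non-root UID
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id -u && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
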
Mar 30 13:44:46.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:44:46.358: INFO: namespace configmap-6530 deletion completed in 6.092190134s • [SLOW TEST:10.309 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:44:46.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-11213352-4428-4407-ba00-403632f45860 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:44:46.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-640" for this suite. Mar 30 13:44:52.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:44:52.514: INFO: namespace configmap-640 deletion completed in 6.101207663s • [SLOW TEST:6.155 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:44:52.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:44:52.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6625' Mar 30 13:44:55.222: INFO: stderr: "" Mar 30 13:44:55.222: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 30 13:44:55.222: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6625' Mar 30 13:44:55.529: INFO: stderr: "" Mar 30 13:44:55.529: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 30 13:44:56.534: INFO: Selector matched 1 pods for map[app:redis] Mar 30 13:44:56.534: INFO: Found 0 / 1 Mar 30 13:44:57.532: INFO: Selector matched 1 pods for map[app:redis] Mar 30 13:44:57.532: INFO: Found 0 / 1 Mar 30 13:44:58.534: INFO: Selector matched 1 pods for map[app:redis] Mar 30 13:44:58.534: INFO: Found 0 / 1 Mar 30 13:44:59.533: INFO: Selector matched 1 pods for map[app:redis] Mar 30 13:44:59.533: INFO: Found 1 / 1 Mar 30 13:44:59.533: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 30 13:44:59.536: INFO: Selector matched 1 pods for map[app:redis] Mar 30 13:44:59.536: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 30 13:44:59.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fvnqj --namespace=kubectl-6625' Mar 30 13:44:59.649: INFO: stderr: "" Mar 30 13:44:59.649: INFO: stdout: "Name: redis-master-fvnqj\nNamespace: kubectl-6625\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Mon, 30 Mar 2020 13:44:55 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.94\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://bbe7625944fdc69da16aea6c4d12d18eb85e268ed176bcb2528f557e8dbdb65f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 30 Mar 2020 13:44:57 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-2r9ms (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-2r9ms:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-2r9ms\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6625/redis-master-fvnqj to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 2s kubelet, iruya-worker Started container redis-master\n" Mar 30 13:44:59.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6625' Mar 30 13:44:59.782: INFO: stderr: "" Mar 30 13:44:59.782: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6625\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s 
replication-controller Created pod: redis-master-fvnqj\n" Mar 30 13:44:59.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6625' Mar 30 13:44:59.884: INFO: stderr: "" Mar 30 13:44:59.884: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6625\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.162.49\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.94:6379\nSession Affinity: None\nEvents: \n" Mar 30 13:44:59.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 30 13:45:00.005: INFO: stderr: "" Mar 30 13:45:00.005: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 30 Mar 2020 13:44:26 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 30 Mar 2020 13:44:26 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 30 Mar 2020 13:44:26 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 30 Mar 2020 13:44:26 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 14d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 14d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., 
overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 30 13:45:00.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6625' Mar 30 13:45:00.122: INFO: stderr: "" Mar 30 13:45:00.122: INFO: stdout: "Name: kubectl-6625\nLabels: e2e-framework=kubectl\n e2e-run=0fa0561c-23b6-41b4-a4df-392116d28243\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:45:00.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6625" for this suite. Mar 30 13:45:22.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:45:22.240: INFO: namespace kubectl-6625 deletion completed in 22.115136997s • [SLOW TEST:29.726 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:45:22.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:45:22.348: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 30 13:45:22.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:22.361: INFO: Number of nodes with available pods: 0 Mar 30 13:45:22.361: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:23.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:23.368: INFO: Number of nodes with available pods: 0 Mar 30 13:45:23.368: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:24.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:24.368: INFO: Number of nodes with available pods: 0 Mar 30 13:45:24.368: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:25.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:25.368: INFO: Number of nodes with available pods: 0 Mar 30 13:45:25.368: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:26.376: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:26.380: INFO: Number of nodes with available pods: 2 Mar 30 13:45:26.380: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 30 13:45:26.410: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:26.410: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:26.432: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:27.437: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:27.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:27.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:28.436: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:28.436: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:28.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:29.437: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 30 13:45:29.437: INFO: Pod daemon-set-5692k is not available Mar 30 13:45:29.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:29.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:30.436: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:30.436: INFO: Pod daemon-set-5692k is not available Mar 30 13:45:30.436: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:30.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:31.437: INFO: Wrong image for pod: daemon-set-5692k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:31.437: INFO: Pod daemon-set-5692k is not available Mar 30 13:45:31.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:31.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:32.437: INFO: Pod daemon-set-kw745 is not available Mar 30 13:45:32.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:32.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:33.442: INFO: Pod daemon-set-kw745 is not available Mar 30 13:45:33.442: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:33.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:34.436: INFO: Pod daemon-set-kw745 is not available Mar 30 13:45:34.436: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:34.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:35.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 30 13:45:35.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:36.437: INFO: Wrong image for pod: daemon-set-xk62g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 30 13:45:36.437: INFO: Pod daemon-set-xk62g is not available Mar 30 13:45:36.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:37.437: INFO: Pod daemon-set-bq688 is not available Mar 30 13:45:37.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 30 13:45:37.446: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:37.448: INFO: Number of nodes with available pods: 1 Mar 30 13:45:37.449: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:38.452: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:38.454: INFO: Number of nodes with available pods: 1 Mar 30 13:45:38.454: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:39.453: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:39.456: INFO: Number of nodes with available pods: 1 Mar 30 13:45:39.456: INFO: Node iruya-worker is running more than one daemon pod Mar 30 13:45:40.454: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 13:45:40.458: INFO: Number of nodes with available pods: 2 Mar 30 13:45:40.458: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3380, will wait for the garbage collector to delete the pods Mar 30 13:45:40.529: INFO: Deleting DaemonSet.extensions daemon-set took: 5.346806ms Mar 30 13:45:40.829: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.24017ms Mar 30 13:45:52.233: INFO: Number of nodes with available pods: 0 Mar 30 13:45:52.233: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 13:45:52.236: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3380/daemonsets","resourceVersion":"2681596"},"items":null} Mar 30 13:45:52.239: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3380/pods","resourceVersion":"2681596"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:45:52.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3380" for this suite. 
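
[Editor's sketch] The rollout above is the RollingUpdate strategy doing its default one-pod-per-node pass: delete an old pod ("Pod daemon-set-5692k is not available"), wait for its replacement (daemon-set-kw745) to become available, then move to the next node; nodes whose taints the DaemonSet does not tolerate are skipped, hence the repeated control-plane message. The same sequence can be reproduced by hand (object names illustrative, images taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
spec:
  selector:
    matchLabels:
      app: demo-ds
  updateStrategy:
    type: RollingUpdate        # the strategy under test (also the default)
  template:
    metadata:
      labels:
        app: demo-ds
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image daemonset/demo-ds app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/demo-ds
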
Mar 30 13:45:58.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:45:58.350: INFO: namespace daemonsets-3380 deletion completed in 6.097973195s • [SLOW TEST:36.109 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:45:58.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:45:58.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062" in namespace "projected-257" to be "success or failure" Mar 30 13:45:58.443: INFO: Pod "downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062": Phase="Pending", Reason="", readiness=false. Elapsed: 18.316774ms Mar 30 13:46:00.447: INFO: Pod "downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022397149s Mar 30 13:46:02.450: INFO: Pod "downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02571824s STEP: Saw pod success Mar 30 13:46:02.450: INFO: Pod "downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062" satisfied condition "success or failure" Mar 30 13:46:02.452: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062 container client-container: STEP: delete the pod Mar 30 13:46:02.482: INFO: Waiting for pod downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062 to disappear Mar 30 13:46:02.494: INFO: Pod downwardapi-volume-1922aee5-46f8-4ccc-871d-b8665f660062 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:46:02.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-257" for this suite. 
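
[Editor's sketch] "Set mode on item file" means the downwardAPI item inside a projected volume carries an explicit per-file mode, and the container checks the resulting permissions. A sketch of the shape under test (path and mode illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400         # the per-item mode being asserted
EOF
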
Mar 30 13:46:08.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:46:08.590: INFO: namespace projected-257 deletion completed in 6.092562579s • [SLOW TEST:10.240 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:46:08.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 13:46:08.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6847' Mar 30 13:46:08.767: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 13:46:08.767: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 30 13:46:10.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6847' Mar 30 13:46:10.945: INFO: stderr: "" Mar 30 13:46:10.945: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:46:10.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6847" for this suite. 
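
[Editor's sketch] The deprecation warning in the stderr above is the noteworthy part: on this v1.15 cluster a bare `kubectl run` still defaulted to the deployment/apps.v1 generator, which is why the test got a Deployment (deleted afterwards through the extensions endpoint). On kubectl v1.18 and later, `run` creates only Pods. Current equivalents of what the test did:

# what the test effectively ran (old kubectl, deprecated generator):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# non-deprecated replacements:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine
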
Mar 30 13:46:33.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:46:33.214: INFO: namespace kubectl-6847 deletion completed in 22.265897839s • [SLOW TEST:24.624 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:46:33.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-9348/secret-test-37740de2-852c-4668-ada2-ead7e36afa85 STEP: Creating a pod to test consume secrets Mar 30 13:46:33.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c" in namespace "secrets-9348" to be "success or failure" Mar 30 13:46:33.295: INFO: Pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.501481ms Mar 30 13:46:35.425: INFO: Pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13994141s Mar 30 13:46:37.429: INFO: Pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143913241s Mar 30 13:46:39.433: INFO: Pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147498038s STEP: Saw pod success Mar 30 13:46:39.433: INFO: Pod "pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c" satisfied condition "success or failure" Mar 30 13:46:39.436: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c container env-test: STEP: delete the pod Mar 30 13:46:39.469: INFO: Waiting for pod pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c to disappear Mar 30 13:46:39.476: INFO: Pod pod-configmaps-5f2ee761-ec8a-434b-ba91-315caa95d05c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:46:39.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9348" for this suite. 
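
[Editor's sketch] Here the Secret is consumed through the environment rather than a volume: a single key is wired to a variable via secretKeyRef and the container just echoes it. Sketch (names illustrative):

kubectl create secret generic demo-secret --from-literal=key-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: key-1
EOF
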
Mar 30 13:46:45.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:46:45.562: INFO: namespace secrets-9348 deletion completed in 6.083248129s • [SLOW TEST:12.347 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:46:45.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-5a04d09f-745e-4101-a595-43a6776f1582 STEP: Creating a pod to test consume configMaps Mar 30 13:46:45.647: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485" in namespace "configmap-4357" to be "success or failure" Mar 30 13:46:45.701: INFO: Pod "pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485": Phase="Pending", Reason="", readiness=false. Elapsed: 53.114698ms Mar 30 13:46:47.704: INFO: Pod "pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056847251s Mar 30 13:46:49.709: INFO: Pod "pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061410536s STEP: Saw pod success Mar 30 13:46:49.709: INFO: Pod "pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485" satisfied condition "success or failure" Mar 30 13:46:49.712: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485 container configmap-volume-test: STEP: delete the pod Mar 30 13:46:49.730: INFO: Waiting for pod pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485 to disappear Mar 30 13:46:49.767: INFO: Pod pod-configmaps-a8b3910f-cbc3-4fc3-9115-1ad6cfdb3485 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:46:49.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4357" for this suite. 
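
[Editor's sketch] The "multiple volumes" variant mounts the same ConfigMap through two separate volume entries in one pod spec and checks that both mounts materialize. Sketch (reusing the demo-cm ConfigMap from the earlier sketch):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: demo-cm
  - name: cm-b
    configMap:
      name: demo-cm
EOF
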
Mar 30 13:46:55.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:46:55.870: INFO: namespace configmap-4357 deletion completed in 6.098718889s • [SLOW TEST:10.307 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:46:55.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8f514ad6-06a5-4ebc-b872-6fcd786cad21 STEP: Creating a pod to test consume secrets Mar 30 13:46:55.934: INFO: Waiting up to 5m0s for pod "pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3" in namespace "secrets-515" to be "success or failure" Mar 30 13:46:55.952: INFO: Pod "pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.215261ms Mar 30 13:46:57.955: INFO: Pod "pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021153032s Mar 30 13:46:59.960: INFO: Pod "pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025666044s STEP: Saw pod success Mar 30 13:46:59.960: INFO: Pod "pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3" satisfied condition "success or failure" Mar 30 13:46:59.963: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3 container secret-volume-test: STEP: delete the pod Mar 30 13:46:59.995: INFO: Waiting for pod pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3 to disappear Mar 30 13:47:00.032: INFO: Pod pod-secrets-1522d2fd-d611-4366-8364-7c606fc963c3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:47:00.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-515" for this suite. 
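
[Editor's sketch] The Secret-volume case mirrors the ConfigMap one; what differs is the volume type (secret/secretName), that values are stored base64-encoded but projected decoded, and the defaultMode knob for file permissions. Sketch (reusing demo-secret from above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/key-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF
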
Mar 30 13:47:06.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:47:06.136: INFO: namespace secrets-515 deletion completed in 6.100829889s • [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:47:06.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 30 13:47:06.215: INFO: Waiting up to 5m0s for pod "var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4" in namespace "var-expansion-629" to be "success or failure" Mar 30 13:47:06.218: INFO: Pod "var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872112ms Mar 30 13:47:08.222: INFO: Pod "var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007152472s Mar 30 13:47:10.227: INFO: Pod "var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011223166s STEP: Saw pod success Mar 30 13:47:10.227: INFO: Pod "var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4" satisfied condition "success or failure" Mar 30 13:47:10.230: INFO: Trying to get logs from node iruya-worker pod var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4 container dapi-container: STEP: delete the pod Mar 30 13:47:10.268: INFO: Waiting for pod var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4 to disappear Mar 30 13:47:10.311: INFO: Pod var-expansion-d7b2e825-cd8b-4d2f-a658-e107ac5915a4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:47:10.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-629" for this suite. 
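
[Editor's sketch] The substitution being tested is done by the kubelet, not by a shell: $(VAR) references in a container's command and args are expanded from that container's environment before exec ($$(VAR) escapes to a literal). Sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/echo"]
    args: ["message is:", "$(MESSAGE)"]   # expanded by the kubelet, no shell involved
    env:
    - name: MESSAGE
      value: hello-from-env
EOF
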
Mar 30 13:47:16.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:47:16.419: INFO: namespace var-expansion-629 deletion completed in 6.098342601s • [SLOW TEST:10.282 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:47:16.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-cacd03e0-6f21-4cd1-aa47-0605a3a3c3b2 STEP: Creating a pod to test consume configMaps Mar 30 13:47:16.517: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac" in namespace "projected-3107" to be "success or failure" Mar 30 13:47:16.531: INFO: Pod "pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac": Phase="Pending", Reason="", readiness=false. Elapsed: 14.291365ms Mar 30 13:47:18.569: INFO: Pod "pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052232608s Mar 30 13:47:20.574: INFO: Pod "pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056773115s STEP: Saw pod success Mar 30 13:47:20.574: INFO: Pod "pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac" satisfied condition "success or failure" Mar 30 13:47:20.578: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac container projected-configmap-volume-test: STEP: delete the pod Mar 30 13:47:20.618: INFO: Waiting for pod pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac to disappear Mar 30 13:47:20.626: INFO: Pod pod-projected-configmaps-b0c01d4b-c02d-4c68-866f-3f1b800c3aac no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:47:20.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3107" for this suite. 
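
(Annotation. "Mappings and Item mode set" in the projected-configMap test above refers to two spec features: Items entries remap a ConfigMap key to a chosen path inside the volume, and a per-item Mode sets that file's permission bits, which the test verifies from inside the container. A sketch under the same assumptions as above — names, key, path, and mode are illustrative:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-file mode: the "Item mode set" part of the test name
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
								// the "mapping": key "data-1" lands at a different path
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "mapped/data-1.cfg", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected-configmap-volume/mapped/data-1.cfg"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
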
Mar 30 13:47:26.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:47:26.744: INFO: namespace projected-3107 deletion completed in 6.113890045s • [SLOW TEST:10.325 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:47:26.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c Mar 30 13:47:26.851: INFO: Pod name my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c: Found 0 pods out of 1 Mar 30 13:47:31.856: INFO: Pod name my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c: Found 1 pods out of 1 Mar 30 13:47:31.856: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c" are running Mar 30 13:47:31.859: INFO: Pod "my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c-jsswz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 13:47:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 13:47:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 13:47:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 13:47:26 +0000 UTC Reason: Message:}]) Mar 30 13:47:31.859: INFO: Trying to dial the pod Mar 30 13:47:36.869: INFO: Controller my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c: Got expected result from replica 1 [my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c-jsswz]: "my-hostname-basic-ac0c1536-955e-4046-84e9-3f0b7ace460c-jsswz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:47:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3799" for this suite. 
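
(Annotation. The ReplicationController test above creates an RC whose pods serve their own hostname over HTTP, then dials each replica and checks the response matches the pod name — the "Got expected result from replica 1" line. A sketch of such an RC; the image and port are assumptions based on the serve-hostname test image family, not read from this log:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic-demo"}
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-demo"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RCs use plain equality selectors
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(rc)
}
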
Mar 30 13:47:42.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:47:42.975: INFO: namespace replication-controller-3799 deletion completed in 6.102024031s • [SLOW TEST:16.231 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:47:42.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-075e5525-f55d-4b48-b7e4-d7d810609b60 STEP: Creating secret with name s-test-opt-upd-fa6f6194-0c8d-4ea3-aa80-5fabb09fa32c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-075e5525-f55d-4b48-b7e4-d7d810609b60 STEP: Updating secret s-test-opt-upd-fa6f6194-0c8d-4ea3-aa80-5fabb09fa32c STEP: Creating secret with name s-test-opt-create-3a77d4af-2efd-487b-9677-8ee15346d324 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:48:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5229" for this suite. 
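
(Annotation. The "optional updates" steps above — delete one secret, update another, create a third, then wait — rely on the kubelet periodically resyncing projected volumes, so changes to the referenced Secrets eventually appear in the mounted files; that resync delay is why this test takes ~90s while most pod tests take ~10s. A sketch of one such optional secret projection, names illustrative:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creates-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create-demo"},
								// Optional: the pod starts even if the secret does not
								// exist yet; the files appear once it is created.
								Optional: &optional,
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "creates-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creates-volume", MountPath: "/etc/projected-secret-volumes/create"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
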
Mar 30 13:49:19.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:49:19.587: INFO: namespace projected-5229 deletion completed in 22.090834151s • [SLOW TEST:96.611 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:49:19.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:49:19.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70" in namespace "downward-api-917" to be "success or failure" Mar 30 13:49:19.652: INFO: Pod "downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70": Phase="Pending", Reason="", readiness=false. Elapsed: 12.612828ms Mar 30 13:49:21.656: INFO: Pod "downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016491136s Mar 30 13:49:23.660: INFO: Pod "downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020714952s STEP: Saw pod success Mar 30 13:49:23.661: INFO: Pod "downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70" satisfied condition "success or failure" Mar 30 13:49:23.663: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70 container client-container: STEP: delete the pod Mar 30 13:49:23.698: INFO: Waiting for pod downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70 to disappear Mar 30 13:49:23.720: INFO: Pod downwardapi-volume-5d56ec3d-5222-4482-b8ef-cffae21c5c70 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:49:23.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-917" for this suite. 
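
(Annotation. The Downward API volume test above uses a resourceFieldRef file: because the container declares no cpu limit, the projected file falls back to reporting the node's allocatable cpu, which is exactly what the test name asserts. A sketch, names illustrative:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// no cpu limit is set on the container below, so this
							// file reports node allocatable cpu instead
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
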
Mar 30 13:49:29.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:49:29.816: INFO: namespace downward-api-917 deletion completed in 6.092302067s • [SLOW TEST:10.228 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:49:29.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:50:03.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2496" for this suite. Mar 30 13:50:09.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:09.198: INFO: namespace namespaces-2496 deletion completed in 6.147928425s STEP: Destroying namespace "nsdeletetest-7329" for this suite. Mar 30 13:50:09.200: INFO: Namespace nsdeletetest-7329 was already deleted STEP: Destroying namespace "nsdeletetest-1224" for this suite. 
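
(Annotation. The Namespaces test above leans on the fact that namespace deletion is asynchronous and cascading: the namespace sits in Terminating while its pods are garbage-collected, and only then disappears — hence the test's "Waiting for the namespace to be removed" step and the repeated "deletion completed in ..." lines throughout this log. A sketch of that delete-and-poll pattern; the namespace name is hypothetical, and wait.PollImmediate is one of several polling helpers that would do:)

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "nsdeletetest-demo" // hypothetical namespace containing a running pod
	if err := client.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Poll until the namespace (and therefore everything in it) is gone.
	err = wait.PollImmediate(2*time.Second, 60*time.Second, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully removed
		}
		return false, err // still Terminating (err == nil) or a real error
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace and all pods in it are gone")
}
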
Mar 30 13:50:15.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:15.283: INFO: namespace nsdeletetest-1224 deletion completed in 6.082671159s • [SLOW TEST:45.467 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:50:15.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 30 13:50:15.355: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Mar 30 13:50:16.153: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 30 13:50:18.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173016, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173016, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173016, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173016, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 13:50:21.502: INFO: Waited 1.123102247s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:50:21.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5836" for this suite. 
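
(Annotation. "Registering the sample API server" above means creating an APIService object so the kube-aggregator proxies an API group to a Deployment-backed service in the cluster — the deployment status dump is that backend coming up. A rough sketch of such a registration using the apiregistration.k8s.io/v1 types; the group, names, and priorities are illustrative assumptions, and a real registration needs the serving CA bundle in CABundle:)

package main

import (
	"encoding/json"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apiService := apiregv1.APIService{
		TypeMeta: metav1.TypeMeta{APIVersion: "apiregistration.k8s.io/v1", Kind: "APIService"},
		// the object name must be <version>.<group>
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			// requests for this group/version are proxied to this in-cluster service
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-demo",
				Name:      "sample-api",
			},
			CABundle:             []byte("<PEM bundle the sample server's cert chains to>"),
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(apiService)
}
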
Mar 30 13:50:28.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:28.114: INFO: namespace aggregator-5836 deletion completed in 6.17603429s • [SLOW TEST:12.830 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:50:28.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 30 13:50:28.195: INFO: Waiting up to 5m0s for pod "downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a" in namespace "downward-api-1903" to be "success or failure" Mar 30 13:50:28.214: INFO: Pod "downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.793524ms Mar 30 13:50:30.272: INFO: Pod "downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077639195s Mar 30 13:50:32.277: INFO: Pod "downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082287877s STEP: Saw pod success Mar 30 13:50:32.277: INFO: Pod "downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a" satisfied condition "success or failure" Mar 30 13:50:32.280: INFO: Trying to get logs from node iruya-worker2 pod downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a container dapi-container: STEP: delete the pod Mar 30 13:50:32.302: INFO: Waiting for pod downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a to disappear Mar 30 13:50:32.318: INFO: Pod downward-api-06f3827b-45d0-44e0-b34f-62aea1fce43a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:50:32.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1903" for this suite. 
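
(Annotation. This Downward API test is the env-var counterpart of the volume-based one earlier: each resourceFieldRef is exposed as an environment variable rather than a file, and the test greps the container's env dump from its log. A sketch with illustrative values:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
				},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
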
Mar 30 13:50:38.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:38.419: INFO: namespace downward-api-1903 deletion completed in 6.097870355s • [SLOW TEST:10.304 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:50:38.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-190ff114-e93e-4008-8581-527d319d2732 STEP: Creating a pod to test consume secrets Mar 30 13:50:38.487: INFO: Waiting up to 5m0s for pod "pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb" in namespace "secrets-2326" to be "success or failure" Mar 30 13:50:38.493: INFO: Pod "pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.063626ms Mar 30 13:50:40.496: INFO: Pod "pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009045255s Mar 30 13:50:42.501: INFO: Pod "pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013507213s STEP: Saw pod success Mar 30 13:50:42.501: INFO: Pod "pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb" satisfied condition "success or failure" Mar 30 13:50:42.504: INFO: Trying to get logs from node iruya-worker pod pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb container secret-volume-test: STEP: delete the pod Mar 30 13:50:42.530: INFO: Waiting for pod pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb to disappear Mar 30 13:50:42.534: INFO: Pod pod-secrets-d6b36930-a3b3-45fc-9365-7301bdad7beb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:50:42.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2326" for this suite. 
Mar 30 13:50:48.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:48.683: INFO: namespace secrets-2326 deletion completed in 6.145302141s • [SLOW TEST:10.264 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:50:48.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9aef9a63-97d9-4c12-aef0-d1e7a7117250 STEP: Creating a pod to test consume configMaps Mar 30 13:50:48.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e" in namespace "configmap-4105" to be "success or failure" Mar 30 13:50:48.804: INFO: Pod "pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.256741ms Mar 30 13:50:50.808: INFO: Pod "pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01975248s Mar 30 13:50:52.812: INFO: Pod "pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023632669s STEP: Saw pod success Mar 30 13:50:52.812: INFO: Pod "pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e" satisfied condition "success or failure" Mar 30 13:50:52.814: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e container configmap-volume-test: STEP: delete the pod Mar 30 13:50:52.850: INFO: Waiting for pod pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e to disappear Mar 30 13:50:52.863: INFO: Pod pod-configmaps-16395903-819b-47ca-9eb1-b0074043a21e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:50:52.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4105" for this suite. 
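
(Annotation. The "as non-root" variant above is the same key-to-path ConfigMap mapping as before, but with the pod forced to run under a non-zero UID, verifying the mapped file is still readable without root. A sketch; the UID, names, and paths are illustrative:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-zero UID exercises the non-root path
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map-demo"},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
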
Mar 30 13:50:58.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:50:58.964: INFO: namespace configmap-4105 deletion completed in 6.097394899s • [SLOW TEST:10.280 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:50:58.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 13:50:59.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6776' Mar 30 13:50:59.157: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 13:50:59.157: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 30 13:50:59.217: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-k2qlg] Mar 30 13:50:59.218: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-k2qlg" in namespace "kubectl-6776" to be "running and ready" Mar 30 13:50:59.273: INFO: Pod "e2e-test-nginx-rc-k2qlg": Phase="Pending", Reason="", readiness=false. Elapsed: 54.913736ms Mar 30 13:51:01.276: INFO: Pod "e2e-test-nginx-rc-k2qlg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058586851s Mar 30 13:51:03.280: INFO: Pod "e2e-test-nginx-rc-k2qlg": Phase="Running", Reason="", readiness=true. Elapsed: 4.062803256s Mar 30 13:51:03.280: INFO: Pod "e2e-test-nginx-rc-k2qlg" satisfied condition "running and ready" Mar 30 13:51:03.280: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-k2qlg] Mar 30 13:51:03.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6776' Mar 30 13:51:03.416: INFO: stderr: "" Mar 30 13:51:03.416: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 30 13:51:03.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6776' Mar 30 13:51:03.526: INFO: stderr: "" Mar 30 13:51:03.526: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:51:03.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6776" for this suite. Mar 30 13:51:25.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:51:25.620: INFO: namespace kubectl-6776 deletion completed in 22.090292951s • [SLOW TEST:26.656 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:51:25.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:51:25.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e" in namespace "downward-api-1426" to be "success or failure" Mar 30 13:51:25.691: INFO: Pod "downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.711851ms Mar 30 13:51:27.696: INFO: Pod "downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00879594s Mar 30 13:51:29.701: INFO: Pod "downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013875493s STEP: Saw pod success Mar 30 13:51:29.701: INFO: Pod "downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e" satisfied condition "success or failure" Mar 30 13:51:29.704: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e container client-container: STEP: delete the pod Mar 30 13:51:29.738: INFO: Waiting for pod downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e to disappear Mar 30 13:51:29.744: INFO: Pod downwardapi-volume-88c99aaf-9e72-4a74-acf3-1e7af919474e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:51:29.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1426" for this suite. Mar 30 13:51:35.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:51:35.854: INFO: namespace downward-api-1426 deletion completed in 6.106449445s • [SLOW TEST:10.234 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:51:35.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 30 13:51:35.904: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:51:43.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2552" for this suite. 
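
(Annotation. The InitContainer test above — "PodSpec: initContainers in spec.initContainers" — checks ordering semantics: init containers run one at a time, in declaration order, and each must exit 0 before the next starts; the regular containers only run after all of them succeed. A sketch of such a RestartNever pod, names and image illustrative:)

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// init1 must exit 0 before init2 starts; run1 starts only
			// after both init containers have succeeded.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
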
Mar 30 13:51:49.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:51:49.659: INFO: namespace init-container-2552 deletion completed in 6.149680279s • [SLOW TEST:13.805 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:51:49.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 30 13:51:49.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8077' Mar 30 13:51:50.056: INFO: stderr: "" Mar 30 13:51:50.056: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:51:50.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8077' Mar 30 13:51:50.167: INFO: stderr: "" Mar 30 13:51:50.167: INFO: stdout: "update-demo-nautilus-d8ccl update-demo-nautilus-n457l " Mar 30 13:51:50.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8ccl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8077' Mar 30 13:51:50.249: INFO: stderr: "" Mar 30 13:51:50.249: INFO: stdout: "" Mar 30 13:51:50.249: INFO: update-demo-nautilus-d8ccl is created but not running Mar 30 13:51:55.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8077' Mar 30 13:51:55.346: INFO: stderr: "" Mar 30 13:51:55.346: INFO: stdout: "update-demo-nautilus-d8ccl update-demo-nautilus-n457l " Mar 30 13:51:55.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8ccl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8077' Mar 30 13:51:55.428: INFO: stderr: "" Mar 30 13:51:55.428: INFO: stdout: "true" Mar 30 13:51:55.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8ccl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8077' Mar 30 13:51:55.516: INFO: stderr: "" Mar 30 13:51:55.516: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:51:55.516: INFO: validating pod update-demo-nautilus-d8ccl Mar 30 13:51:55.520: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:51:55.520: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:51:55.520: INFO: update-demo-nautilus-d8ccl is verified up and running Mar 30 13:51:55.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n457l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8077' Mar 30 13:51:55.616: INFO: stderr: "" Mar 30 13:51:55.616: INFO: stdout: "true" Mar 30 13:51:55.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n457l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8077' Mar 30 13:51:55.710: INFO: stderr: "" Mar 30 13:51:55.710: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:51:55.710: INFO: validating pod update-demo-nautilus-n457l Mar 30 13:51:55.714: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:51:55.714: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:51:55.714: INFO: update-demo-nautilus-n457l is verified up and running STEP: using delete to clean up resources Mar 30 13:51:55.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8077' Mar 30 13:51:55.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 30 13:51:55.820: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 30 13:51:55.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8077' Mar 30 13:51:55.924: INFO: stderr: "No resources found.\n" Mar 30 13:51:55.924: INFO: stdout: "" Mar 30 13:51:55.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8077 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 13:51:56.011: INFO: stderr: "" Mar 30 13:51:56.011: INFO: stdout: "update-demo-nautilus-d8ccl\nupdate-demo-nautilus-n457l\n" Mar 30 13:51:56.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8077' Mar 30 13:51:56.609: INFO: stderr: "No resources found.\n" Mar 30 13:51:56.609: INFO: stdout: "" Mar 30 13:51:56.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8077 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 13:51:56.703: INFO: stderr: "" Mar 30 13:51:56.703: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:51:56.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8077" for this suite. Mar 30 13:52:02.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:52:02.953: INFO: namespace kubectl-8077 deletion completed in 6.246569118s • [SLOW TEST:13.294 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:52:02.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 30 13:52:03.054: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2281' Mar 30 13:52:03.338: INFO: stderr: "" Mar 30 13:52:03.338: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:52:03.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2281' Mar 30 13:52:03.444: INFO: stderr: "" Mar 30 13:52:03.444: INFO: stdout: "update-demo-nautilus-74rdt update-demo-nautilus-lrjks " Mar 30 13:52:03.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74rdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:03.528: INFO: stderr: "" Mar 30 13:52:03.528: INFO: stdout: "" Mar 30 13:52:03.528: INFO: update-demo-nautilus-74rdt is created but not running Mar 30 13:52:08.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2281' Mar 30 13:52:08.624: INFO: stderr: "" Mar 30 13:52:08.624: INFO: stdout: "update-demo-nautilus-74rdt update-demo-nautilus-lrjks " Mar 30 13:52:08.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74rdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:08.715: INFO: stderr: "" Mar 30 13:52:08.715: INFO: stdout: "true" Mar 30 13:52:08.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74rdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:08.814: INFO: stderr: "" Mar 30 13:52:08.814: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:52:08.814: INFO: validating pod update-demo-nautilus-74rdt Mar 30 13:52:08.818: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:52:08.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:52:08.818: INFO: update-demo-nautilus-74rdt is verified up and running Mar 30 13:52:08.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrjks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:08.917: INFO: stderr: "" Mar 30 13:52:08.918: INFO: stdout: "true" Mar 30 13:52:08.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrjks -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:09.011: INFO: stderr: "" Mar 30 13:52:09.011: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 13:52:09.011: INFO: validating pod update-demo-nautilus-lrjks Mar 30 13:52:09.015: INFO: got data: { "image": "nautilus.jpg" } Mar 30 13:52:09.015: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 13:52:09.015: INFO: update-demo-nautilus-lrjks is verified up and running STEP: rolling-update to new replication controller Mar 30 13:52:09.017: INFO: scanned /root for discovery docs: Mar 30 13:52:09.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2281' Mar 30 13:52:31.552: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 30 13:52:31.552: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 13:52:31.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2281' Mar 30 13:52:31.649: INFO: stderr: "" Mar 30 13:52:31.649: INFO: stdout: "update-demo-kitten-hdc74 update-demo-kitten-tr7cf " Mar 30 13:52:31.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hdc74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:31.735: INFO: stderr: "" Mar 30 13:52:31.735: INFO: stdout: "true" Mar 30 13:52:31.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hdc74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:31.822: INFO: stderr: "" Mar 30 13:52:31.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 30 13:52:31.822: INFO: validating pod update-demo-kitten-hdc74 Mar 30 13:52:31.825: INFO: got data: { "image": "kitten.jpg" } Mar 30 13:52:31.825: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 30 13:52:31.825: INFO: update-demo-kitten-hdc74 is verified up and running Mar 30 13:52:31.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tr7cf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:31.924: INFO: stderr: "" Mar 30 13:52:31.924: INFO: stdout: "true" Mar 30 13:52:31.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tr7cf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2281' Mar 30 13:52:32.018: INFO: stderr: "" Mar 30 13:52:32.018: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 30 13:52:32.018: INFO: validating pod update-demo-kitten-tr7cf Mar 30 13:52:32.022: INFO: got data: { "image": "kitten.jpg" } Mar 30 13:52:32.022: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 30 13:52:32.022: INFO: update-demo-kitten-tr7cf is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:52:32.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2281" for this suite. Mar 30 13:52:54.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:52:54.117: INFO: namespace kubectl-2281 deletion completed in 22.092035795s • [SLOW TEST:51.163 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:52:54.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 30 13:52:58.242: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:52:58.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9958" for this suite. 
Mar 30 13:53:04.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:53:04.372: INFO: namespace container-runtime-9958 deletion completed in 6.113769712s • [SLOW TEST:10.254 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:53:04.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b6217245-3d0d-43e6-874b-359407a564c3 STEP: Creating configMap with name cm-test-opt-upd-ec758213-2c4e-4f4f-8497-f2fa3625cf2e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b6217245-3d0d-43e6-874b-359407a564c3 STEP: Updating configmap cm-test-opt-upd-ec758213-2c4e-4f4f-8497-f2fa3625cf2e STEP: Creating configMap with name cm-test-opt-create-c0f259b4-a41a-4cae-927e-8252f1f410de STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:54:32.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5353" for this suite. 
Mar 30 13:54:46.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:54:47.052: INFO: namespace configmap-5353 deletion completed in 14.099112474s • [SLOW TEST:102.679 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:54:47.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:54:47.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a" in namespace "downward-api-5960" to be "success or failure" Mar 30 13:54:47.145: INFO: Pod "downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488454ms Mar 30 13:54:49.156: INFO: Pod "downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014162889s Mar 30 13:54:51.160: INFO: Pod "downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018488011s STEP: Saw pod success Mar 30 13:54:51.160: INFO: Pod "downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a" satisfied condition "success or failure" Mar 30 13:54:51.163: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a container client-container: STEP: delete the pod Mar 30 13:54:51.182: INFO: Waiting for pod downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a to disappear Mar 30 13:54:51.203: INFO: Pod downwardapi-volume-a2d4bd04-6808-4ffd-b1bc-46b1d206048a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:54:51.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5960" for this suite. 
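Note: the mode-on-item-file check corresponds to a downwardAPI volume item carrying an explicit mode. A sketch with invented names; 0400 is just an example mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # -L dereferences the symlink the kubelet's atomic writer creates:
    command: ["/bin/sh", "-c", "stat -L -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # per-item mode; overrides the volume's defaultMode
EOF
kubectl logs downward-mode-demo   # expected: 400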
Mar 30 13:54:57.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:54:57.297: INFO: namespace downward-api-5960 deletion completed in 6.090961671s • [SLOW TEST:10.244 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:54:57.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 13:54:57.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 30 13:54:57.459: INFO: stderr: "" Mar 30 13:54:57.459: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:54:57.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3133" for this suite. 
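Note: outside the suite, the same check is simply running kubectl version and confirming both stanzas are present; a missing Server Version block usually means the apiserver was unreachable:

kubectl version           # full client and server version.Info
kubectl version --short   # one condensed line per side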
Mar 30 13:55:03.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:55:03.553: INFO: namespace kubectl-3133 deletion completed in 6.089745608s • [SLOW TEST:6.256 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:55:03.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 30 13:55:03.603: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 13:55:03.609: INFO: Waiting for terminating namespaces to be deleted... Mar 30 13:55:03.611: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 30 13:55:03.616: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.616: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 13:55:03.616: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.616: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 13:55:03.616: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 30 13:55:03.622: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.622: INFO: Container coredns ready: true, restart count 0 Mar 30 13:55:03.622: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.622: INFO: Container coredns ready: true, restart count 0 Mar 30 13:55:03.622: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.622: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 13:55:03.622: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 30 13:55:03.622: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-77bfaa10-6be1-4cf3-b9dc-7ebc2c519293 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-77bfaa10-6be1-4cf3-b9dc-7ebc2c519293 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-77bfaa10-6be1-4cf3-b9dc-7ebc2c519293 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:55:11.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2589" for this suite. Mar 30 13:55:25.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:55:25.901: INFO: namespace sched-pred-2589 deletion completed in 14.129368346s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:22.348 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:55:25.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 30 13:55:29.999: INFO: Pod pod-hostip-e527bc42-7d6a-4bba-94a5-95b7b546849b has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:55:30.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6438" for this suite. 
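Note: the last two specs boil down to a short imperative sequence. A sketch reusing one of the node names from this log; the label key and value are invented:

kubectl label node iruya-worker2 example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# .status.hostIP should be an address of the node the pod landed on:
kubectl get pod nodeselector-demo -o jsonpath='{.status.hostIP}{"\n"}'
# A trailing '-' on the key removes the label again:
kubectl label node iruya-worker2 example.com/e2e-demo-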
Mar 30 13:55:52.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:55:52.118: INFO: namespace pods-6438 deletion completed in 22.113496891s • [SLOW TEST:26.216 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:55:52.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-45958170-05ce-4444-8893-8f7dccf031a8 STEP: Creating a pod to test consume secrets Mar 30 13:55:52.182: INFO: Waiting up to 5m0s for pod "pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81" in namespace "secrets-2942" to be "success or failure" Mar 30 13:55:52.198: INFO: Pod "pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81": Phase="Pending", Reason="", readiness=false. Elapsed: 15.443513ms Mar 30 13:55:54.202: INFO: Pod "pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019515882s Mar 30 13:55:56.206: INFO: Pod "pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02387894s STEP: Saw pod success Mar 30 13:55:56.206: INFO: Pod "pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81" satisfied condition "success or failure" Mar 30 13:55:56.209: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81 container secret-volume-test: STEP: delete the pod Mar 30 13:55:56.242: INFO: Waiting for pod pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81 to disappear Mar 30 13:55:56.264: INFO: Pod pod-secrets-4fece133-39cc-4493-85a7-e317b02d1c81 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:55:56.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2942" for this suite. 
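Note: the multi-volume secret consumption above amounts to one secret referenced by two volumes of the same pod. A sketch; all names invented:

kubectl create secret generic multi-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/secret-1
    - name: vol-2
      mountPath: /etc/secret-2
  volumes:
  - name: vol-1
    secret:
      secretName: multi-demo
  - name: vol-2
    secret:
      secretName: multi-demo
EOF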
Mar 30 13:56:02.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:56:02.359: INFO: namespace secrets-2942 deletion completed in 6.091038419s • [SLOW TEST:10.241 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:56:02.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0330 13:56:03.479837 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 13:56:03.479: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:56:03.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5228" for this suite. 
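Note: the ownership chain the garbage collector walks here (Deployment -> ReplicaSet -> Pods) can be watched by hand; the deployment name is invented:

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get replicaset -l app=gc-demo   # one ReplicaSet owned by the deployment
# A plain delete lets the garbage collector remove the dependent ReplicaSet
# and its pods, as asserted above; --cascade=false would orphan them instead.
kubectl delete deployment gc-demo
kubectl get replicaset -l app=gc-demo   # eventually empty once the GC catches up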
Mar 30 13:56:09.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:56:09.591: INFO: namespace gc-5228 deletion completed in 6.108611356s • [SLOW TEST:7.232 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:56:09.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 30 13:56:09.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 30 13:56:09.795: INFO: stderr: "" Mar 30 13:56:09.795: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:56:09.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3876" for this suite. 
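Note: the equivalent ad-hoc check is a one-liner; grep -x matches the whole line, so the exit status is 0 only if the core group/version is actually served:

kubectl api-versions | grep -x v1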
Mar 30 13:56:15.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:56:15.895: INFO: namespace kubectl-3876 deletion completed in 6.095093398s • [SLOW TEST:6.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:56:15.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b81d40ec-e3d7-4a1c-bb69-7bc6b63a65de STEP: Creating configMap with name cm-test-opt-upd-ab1334ed-8d48-450c-a6b8-461cb5cf6840 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b81d40ec-e3d7-4a1c-bb69-7bc6b63a65de STEP: Updating configmap cm-test-opt-upd-ab1334ed-8d48-450c-a6b8-461cb5cf6840 STEP: Creating configMap with name cm-test-opt-create-fc9aacb1-8759-4d67-be05-c53d020ae7a9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:57:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6845" for this suite. 
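Note: a projected volume differs from the plain configMap volume earlier in that several sources share a single mount. A sketch combining an optional configMap with a downwardAPI item; all names invented:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: maybe-absent
          optional: true   # optional source: the pod starts without it
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF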
Mar 30 13:57:52.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:57:52.539: INFO: namespace projected-6845 deletion completed in 22.10894536s • [SLOW TEST:96.643 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:57:52.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-7d67cbfe-294b-46bf-a667-2f64e38ce4ff STEP: Creating a pod to test consume secrets Mar 30 13:57:52.612: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96" in namespace "projected-4757" to be "success or failure" Mar 30 13:57:52.632: INFO: Pod "pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96": Phase="Pending", Reason="", readiness=false. Elapsed: 20.427532ms Mar 30 13:57:54.637: INFO: Pod "pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025341144s Mar 30 13:57:56.642: INFO: Pod "pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029916754s STEP: Saw pod success Mar 30 13:57:56.642: INFO: Pod "pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96" satisfied condition "success or failure" Mar 30 13:57:56.645: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96 container projected-secret-volume-test: STEP: delete the pod Mar 30 13:57:56.676: INFO: Waiting for pod pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96 to disappear Mar 30 13:57:56.688: INFO: Pod pod-projected-secrets-d62e71db-7a4e-4c3e-95c2-99febb9b4b96 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:57:56.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4757" for this suite. 
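Note: a projected secret source can also remap a key to a custom path. A sketch, reusing the invented multi-demo secret from the earlier note:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/projected/renamed"]
    volumeMounts:
    - name: vol
      mountPath: /etc/projected
  volumes:
  - name: vol
    projected:
      sources:
      - secret:
          name: multi-demo
          items:
          - key: data-1
            path: renamed   # key remapped to a different file name
EOF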
Mar 30 13:58:02.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:58:02.782: INFO: namespace projected-4757 deletion completed in 6.091208103s • [SLOW TEST:10.243 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:58:02.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-6302/configmap-test-f6ca185d-0f37-4a85-8d9f-fb8d6d081d5d STEP: Creating a pod to test consume configMaps Mar 30 13:58:02.875: INFO: Waiting up to 5m0s for pod "pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89" in namespace "configmap-6302" to be "success or failure" Mar 30 13:58:02.931: INFO: Pod "pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89": Phase="Pending", Reason="", readiness=false. Elapsed: 56.024832ms Mar 30 13:58:04.936: INFO: Pod "pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060180591s Mar 30 13:58:06.940: INFO: Pod "pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06405835s STEP: Saw pod success Mar 30 13:58:06.940: INFO: Pod "pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89" satisfied condition "success or failure" Mar 30 13:58:06.943: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89 container env-test: STEP: delete the pod Mar 30 13:58:06.971: INFO: Waiting for pod pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89 to disappear Mar 30 13:58:06.981: INFO: Pod pod-configmaps-93cc416c-9e10-422f-9559-0e3b4958fc89 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:58:06.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6302" for this suite. 
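Note: consuming a configMap through the environment, as this spec does, is a valueFrom/configMapKeyRef on the container. A sketch with invented names:

kubectl create configmap env-demo --from-literal=DEMO_GREETING=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo $DEMO_GREETING"]
    env:
    - name: DEMO_GREETING
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: DEMO_GREETING
EOF
kubectl logs cm-env-demo   # hello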
Mar 30 13:58:12.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:58:13.073: INFO: namespace configmap-6302 deletion completed in 6.088440507s • [SLOW TEST:10.290 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:58:13.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 30 13:58:13.140: INFO: Waiting up to 5m0s for pod "downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f" in namespace "downward-api-4838" to be "success or failure" Mar 30 13:58:13.177: INFO: Pod "downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.505807ms Mar 30 13:58:15.181: INFO: Pod "downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040783547s Mar 30 13:58:17.185: INFO: Pod "downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044831491s STEP: Saw pod success Mar 30 13:58:17.185: INFO: Pod "downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f" satisfied condition "success or failure" Mar 30 13:58:17.188: INFO: Trying to get logs from node iruya-worker pod downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f container dapi-container: STEP: delete the pod Mar 30 13:58:17.218: INFO: Waiting for pod downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f to disappear Mar 30 13:58:17.243: INFO: Pod downward-api-2f750811-5390-4e28-9ddd-4c1d46f1ff2f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:58:17.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4838" for this suite. 
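Note: the pod name, namespace, and IP env vars come from fieldRef selectors on the downward API. A sketch; names invented:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo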
Mar 30 13:58:23.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:58:23.344: INFO: namespace downward-api-4838 deletion completed in 6.097106871s • [SLOW TEST:10.271 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:58:23.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 13:58:23.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a" in namespace "projected-8410" to be "success or failure" Mar 30 13:58:23.427: INFO: Pod "downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.420505ms Mar 30 13:58:25.431: INFO: Pod "downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007203272s Mar 30 13:58:27.435: INFO: Pod "downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01083718s STEP: Saw pod success Mar 30 13:58:27.435: INFO: Pod "downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a" satisfied condition "success or failure" Mar 30 13:58:27.438: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a container client-container: STEP: delete the pod Mar 30 13:58:27.496: INFO: Waiting for pod downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a to disappear Mar 30 13:58:27.505: INFO: Pod downwardapi-volume-7cc45473-30fb-4d5f-8d72-6336c186458a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:58:27.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8410" for this suite. 
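Note: the defaulting under test here is that a resourceFieldRef on limits.memory, for a container that sets no memory limit, reports the node's allocatable memory instead. A sketch; names invented:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: main   # deliberately sets no resources.limits.memory
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memlimit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memlimit
        resourceFieldRef:
          containerName: main
          resource: limits.memory
EOF
kubectl logs memlimit-demo   # node allocatable memory, in bytes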
Mar 30 13:58:33.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:58:33.631: INFO: namespace projected-8410 deletion completed in 6.123391217s • [SLOW TEST:10.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:58:33.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 30 13:58:33.669: INFO: PodSpec: initContainers in spec.initContainers Mar 30 13:59:24.416: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9b6dd3ca-2b9c-4c90-9c2a-455d3119da9f", GenerateName:"", Namespace:"init-container-7504", SelfLink:"/api/v1/namespaces/init-container-7504/pods/pod-init-9b6dd3ca-2b9c-4c90-9c2a-455d3119da9f", UID:"169fd3f4-21f6-49a3-8955-95f44fe5ddb8", ResourceVersion:"2684359", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721173513, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"669338931"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5bv6m", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00150b000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5bv6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5bv6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5bv6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00269c5d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00229c9c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00269c760)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00269c780)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00269c788), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00269c78c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173513, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173513, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173513, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721173513, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.114", StartTime:(*v1.Time)(0xc002dae140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002dae180), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025fc150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b16a658cd9bb244e5dda1ff8090e391f03ca540892a1ca018e2b61349a184bc0"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002dae1a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002dae160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 13:59:24.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7504" for this suite. Mar 30 13:59:46.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 13:59:46.524: INFO: namespace init-container-7504 deletion completed in 22.094259545s • [SLOW TEST:72.892 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 13:59:46.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 13:59:46.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5389' Mar 30 13:59:48.967: INFO: stderr: "" Mar 30 13:59:48.967: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 30 13:59:54.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod 
--namespace=kubectl-5389 -o json' Mar 30 13:59:54.117: INFO: stderr: "" Mar 30 13:59:54.117: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-30T13:59:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-5389\",\n \"resourceVersion\": \"2684441\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5389/pods/e2e-test-nginx-pod\",\n \"uid\": \"f75d0a8e-ad54-440f-8e76-452b7adfdccd\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hqhrt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hqhrt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hqhrt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T13:59:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T13:59:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T13:59:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T13:59:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://23b040c8af54b75ede1d6c933e4324c07fc470f6a2224369dfd6379f3c09bbee\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-30T13:59:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.115\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-30T13:59:48Z\"\n }\n}\n" STEP: replace the image in the pod Mar 30 13:59:54.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5389' Mar 30 13:59:54.376: INFO: stderr: "" Mar 30 13:59:54.376: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 30 
13:59:54.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5389' Mar 30 14:00:02.179: INFO: stderr: "" Mar 30 14:00:02.179: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:00:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5389" for this suite. Mar 30 14:00:08.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:00:08.279: INFO: namespace kubectl-5389 deletion completed in 6.089257038s • [SLOW TEST:21.755 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:00:08.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2169 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2169 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2169 Mar 30 14:00:08.369: INFO: Found 0 stateful pods, waiting for 1 Mar 30 14:00:18.374: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 30 14:00:18.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:00:18.648: INFO: stderr: "I0330 14:00:18.506007 2257 log.go:172] (0xc0009204d0) (0xc0004be6e0) Create stream\nI0330 14:00:18.506114 2257 log.go:172] (0xc0009204d0) (0xc0004be6e0) Stream added, broadcasting: 1\nI0330 14:00:18.509382 2257 log.go:172] (0xc0009204d0) Reply frame received for 1\nI0330 14:00:18.509454 2257 log.go:172] (0xc0009204d0) (0xc00076a000) Create stream\nI0330 14:00:18.509490 2257 log.go:172] (0xc0009204d0) 
(0xc00076a000) Stream added, broadcasting: 3\nI0330 14:00:18.510696 2257 log.go:172] (0xc0009204d0) Reply frame received for 3\nI0330 14:00:18.510768 2257 log.go:172] (0xc0009204d0) (0xc0007dc000) Create stream\nI0330 14:00:18.510784 2257 log.go:172] (0xc0009204d0) (0xc0007dc000) Stream added, broadcasting: 5\nI0330 14:00:18.511835 2257 log.go:172] (0xc0009204d0) Reply frame received for 5\nI0330 14:00:18.604056 2257 log.go:172] (0xc0009204d0) Data frame received for 5\nI0330 14:00:18.604079 2257 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0330 14:00:18.604094 2257 log.go:172] (0xc0007dc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:00:18.642248 2257 log.go:172] (0xc0009204d0) Data frame received for 3\nI0330 14:00:18.642297 2257 log.go:172] (0xc00076a000) (3) Data frame handling\nI0330 14:00:18.642330 2257 log.go:172] (0xc00076a000) (3) Data frame sent\nI0330 14:00:18.642346 2257 log.go:172] (0xc0009204d0) Data frame received for 3\nI0330 14:00:18.642358 2257 log.go:172] (0xc00076a000) (3) Data frame handling\nI0330 14:00:18.642488 2257 log.go:172] (0xc0009204d0) Data frame received for 5\nI0330 14:00:18.642515 2257 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0330 14:00:18.643926 2257 log.go:172] (0xc0009204d0) Data frame received for 1\nI0330 14:00:18.643955 2257 log.go:172] (0xc0004be6e0) (1) Data frame handling\nI0330 14:00:18.643970 2257 log.go:172] (0xc0004be6e0) (1) Data frame sent\nI0330 14:00:18.643990 2257 log.go:172] (0xc0009204d0) (0xc0004be6e0) Stream removed, broadcasting: 1\nI0330 14:00:18.644020 2257 log.go:172] (0xc0009204d0) Go away received\nI0330 14:00:18.644377 2257 log.go:172] (0xc0009204d0) (0xc0004be6e0) Stream removed, broadcasting: 1\nI0330 14:00:18.644412 2257 log.go:172] (0xc0009204d0) (0xc00076a000) Stream removed, broadcasting: 3\nI0330 14:00:18.644433 2257 log.go:172] (0xc0009204d0) (0xc0007dc000) Stream removed, broadcasting: 5\n" Mar 30 14:00:18.648: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:00:18.648: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:00:18.653: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 30 14:00:28.657: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:00:28.657: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:00:28.685: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999964s Mar 30 14:00:29.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980572867s Mar 30 14:00:30.692: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977081888s Mar 30 14:00:31.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973620207s Mar 30 14:00:32.723: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.948052232s Mar 30 14:00:33.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.94311206s Mar 30 14:00:34.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.938052711s Mar 30 14:00:35.737: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.933720155s Mar 30 14:00:36.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.928759409s Mar 30 14:00:37.747: INFO: Verifying statefulset ss doesn't scale past 1 for another 924.186294ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of 
them will be running in namespace statefulset-2169 Mar 30 14:00:38.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:00:38.997: INFO: stderr: "I0330 14:00:38.895785 2278 log.go:172] (0xc000116fd0) (0xc0000d6aa0) Create stream\nI0330 14:00:38.895856 2278 log.go:172] (0xc000116fd0) (0xc0000d6aa0) Stream added, broadcasting: 1\nI0330 14:00:38.907126 2278 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0330 14:00:38.907192 2278 log.go:172] (0xc000116fd0) (0xc0000d6320) Create stream\nI0330 14:00:38.907206 2278 log.go:172] (0xc000116fd0) (0xc0000d6320) Stream added, broadcasting: 3\nI0330 14:00:38.908234 2278 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0330 14:00:38.908267 2278 log.go:172] (0xc000116fd0) (0xc0003ee000) Create stream\nI0330 14:00:38.908283 2278 log.go:172] (0xc000116fd0) (0xc0003ee000) Stream added, broadcasting: 5\nI0330 14:00:38.910787 2278 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0330 14:00:38.991261 2278 log.go:172] (0xc000116fd0) Data frame received for 5\nI0330 14:00:38.991333 2278 log.go:172] (0xc000116fd0) Data frame received for 3\nI0330 14:00:38.991376 2278 log.go:172] (0xc0000d6320) (3) Data frame handling\nI0330 14:00:38.991413 2278 log.go:172] (0xc0000d6320) (3) Data frame sent\nI0330 14:00:38.991445 2278 log.go:172] (0xc000116fd0) Data frame received for 3\nI0330 14:00:38.991467 2278 log.go:172] (0xc0000d6320) (3) Data frame handling\nI0330 14:00:38.991500 2278 log.go:172] (0xc0003ee000) (5) Data frame handling\nI0330 14:00:38.991521 2278 log.go:172] (0xc0003ee000) (5) Data frame sent\nI0330 14:00:38.991539 2278 log.go:172] (0xc000116fd0) Data frame received for 5\nI0330 14:00:38.991555 2278 log.go:172] (0xc0003ee000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 14:00:38.992846 2278 log.go:172] (0xc000116fd0) Data frame received for 1\nI0330 14:00:38.992873 2278 log.go:172] (0xc0000d6aa0) (1) Data frame handling\nI0330 14:00:38.992905 2278 log.go:172] (0xc0000d6aa0) (1) Data frame sent\nI0330 14:00:38.992933 2278 log.go:172] (0xc000116fd0) (0xc0000d6aa0) Stream removed, broadcasting: 1\nI0330 14:00:38.993059 2278 log.go:172] (0xc000116fd0) Go away received\nI0330 14:00:38.993626 2278 log.go:172] (0xc000116fd0) (0xc0000d6aa0) Stream removed, broadcasting: 1\nI0330 14:00:38.993658 2278 log.go:172] (0xc000116fd0) (0xc0000d6320) Stream removed, broadcasting: 3\nI0330 14:00:38.993672 2278 log.go:172] (0xc000116fd0) (0xc0003ee000) Stream removed, broadcasting: 5\n" Mar 30 14:00:38.998: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:00:38.998: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:00:39.001: INFO: Found 1 stateful pods, waiting for 3 Mar 30 14:00:49.006: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 14:00:49.006: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 14:00:49.006: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 30 14:00:49.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:00:49.235: INFO: stderr: "I0330 14:00:49.133667 2298 log.go:172] (0xc000a8a420) (0xc0005a2aa0) Create stream\nI0330 14:00:49.133714 2298 log.go:172] (0xc000a8a420) (0xc0005a2aa0) Stream added, broadcasting: 1\nI0330 14:00:49.136779 2298 log.go:172] (0xc000a8a420) Reply frame received for 1\nI0330 14:00:49.136827 2298 log.go:172] (0xc000a8a420) (0xc0005a23c0) Create stream\nI0330 14:00:49.136842 2298 log.go:172] (0xc000a8a420) (0xc0005a23c0) Stream added, broadcasting: 3\nI0330 14:00:49.137898 2298 log.go:172] (0xc000a8a420) Reply frame received for 3\nI0330 14:00:49.137928 2298 log.go:172] (0xc000a8a420) (0xc0007fa000) Create stream\nI0330 14:00:49.137938 2298 log.go:172] (0xc000a8a420) (0xc0007fa000) Stream added, broadcasting: 5\nI0330 14:00:49.138878 2298 log.go:172] (0xc000a8a420) Reply frame received for 5\nI0330 14:00:49.229622 2298 log.go:172] (0xc000a8a420) Data frame received for 5\nI0330 14:00:49.229654 2298 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0330 14:00:49.229664 2298 log.go:172] (0xc0007fa000) (5) Data frame sent\nI0330 14:00:49.229674 2298 log.go:172] (0xc000a8a420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:00:49.229710 2298 log.go:172] (0xc000a8a420) Data frame received for 3\nI0330 14:00:49.229760 2298 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0330 14:00:49.229785 2298 log.go:172] (0xc0005a23c0) (3) Data frame sent\nI0330 14:00:49.229801 2298 log.go:172] (0xc000a8a420) Data frame received for 3\nI0330 14:00:49.229810 2298 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0330 14:00:49.229824 2298 log.go:172] (0xc0007fa000) (5) Data frame handling\nI0330 14:00:49.231224 2298 log.go:172] (0xc000a8a420) Data frame received for 1\nI0330 14:00:49.231248 2298 log.go:172] (0xc0005a2aa0) (1) Data frame handling\nI0330 14:00:49.231268 2298 log.go:172] (0xc0005a2aa0) (1) Data frame sent\nI0330 14:00:49.231292 2298 log.go:172] (0xc000a8a420) (0xc0005a2aa0) Stream removed, broadcasting: 1\nI0330 14:00:49.231315 2298 log.go:172] (0xc000a8a420) Go away received\nI0330 14:00:49.231600 2298 log.go:172] (0xc000a8a420) (0xc0005a2aa0) Stream removed, broadcasting: 1\nI0330 14:00:49.231619 2298 log.go:172] (0xc000a8a420) (0xc0005a23c0) Stream removed, broadcasting: 3\nI0330 14:00:49.231627 2298 log.go:172] (0xc000a8a420) (0xc0007fa000) Stream removed, broadcasting: 5\n" Mar 30 14:00:49.235: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:00:49.235: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:00:49.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:00:49.473: INFO: stderr: "I0330 14:00:49.360458 2319 log.go:172] (0xc000a1e370) (0xc0009e4640) Create stream\nI0330 14:00:49.360514 2319 log.go:172] (0xc000a1e370) (0xc0009e4640) Stream added, broadcasting: 1\nI0330 14:00:49.363706 2319 log.go:172] (0xc000a1e370) Reply frame received for 1\nI0330 14:00:49.363766 2319 log.go:172] (0xc000a1e370) (0xc0008bc000) Create stream\nI0330 14:00:49.363800 2319 log.go:172] (0xc000a1e370) (0xc0008bc000) Stream added, broadcasting: 3\nI0330 14:00:49.365024 2319 log.go:172] (0xc000a1e370) Reply frame received for 3\nI0330 14:00:49.365071 2319 log.go:172] (0xc000a1e370) (0xc00033c1e0) Create stream\nI0330 
14:00:49.365086 2319 log.go:172] (0xc000a1e370) (0xc00033c1e0) Stream added, broadcasting: 5\nI0330 14:00:49.366235 2319 log.go:172] (0xc000a1e370) Reply frame received for 5\nI0330 14:00:49.439475 2319 log.go:172] (0xc000a1e370) Data frame received for 5\nI0330 14:00:49.439498 2319 log.go:172] (0xc00033c1e0) (5) Data frame handling\nI0330 14:00:49.439510 2319 log.go:172] (0xc00033c1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:00:49.467086 2319 log.go:172] (0xc000a1e370) Data frame received for 3\nI0330 14:00:49.467120 2319 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0330 14:00:49.467135 2319 log.go:172] (0xc0008bc000) (3) Data frame sent\nI0330 14:00:49.467144 2319 log.go:172] (0xc000a1e370) Data frame received for 3\nI0330 14:00:49.467152 2319 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0330 14:00:49.467370 2319 log.go:172] (0xc000a1e370) Data frame received for 5\nI0330 14:00:49.467381 2319 log.go:172] (0xc00033c1e0) (5) Data frame handling\nI0330 14:00:49.469607 2319 log.go:172] (0xc000a1e370) Data frame received for 1\nI0330 14:00:49.469628 2319 log.go:172] (0xc0009e4640) (1) Data frame handling\nI0330 14:00:49.469639 2319 log.go:172] (0xc0009e4640) (1) Data frame sent\nI0330 14:00:49.469651 2319 log.go:172] (0xc000a1e370) (0xc0009e4640) Stream removed, broadcasting: 1\nI0330 14:00:49.469665 2319 log.go:172] (0xc000a1e370) Go away received\nI0330 14:00:49.469999 2319 log.go:172] (0xc000a1e370) (0xc0009e4640) Stream removed, broadcasting: 1\nI0330 14:00:49.470016 2319 log.go:172] (0xc000a1e370) (0xc0008bc000) Stream removed, broadcasting: 3\nI0330 14:00:49.470025 2319 log.go:172] (0xc000a1e370) (0xc00033c1e0) Stream removed, broadcasting: 5\n" Mar 30 14:00:49.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:00:49.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:00:49.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:00:49.701: INFO: stderr: "I0330 14:00:49.599839 2340 log.go:172] (0xc000116790) (0xc000728820) Create stream\nI0330 14:00:49.599914 2340 log.go:172] (0xc000116790) (0xc000728820) Stream added, broadcasting: 1\nI0330 14:00:49.603087 2340 log.go:172] (0xc000116790) Reply frame received for 1\nI0330 14:00:49.603122 2340 log.go:172] (0xc000116790) (0xc0007288c0) Create stream\nI0330 14:00:49.603131 2340 log.go:172] (0xc000116790) (0xc0007288c0) Stream added, broadcasting: 3\nI0330 14:00:49.604577 2340 log.go:172] (0xc000116790) Reply frame received for 3\nI0330 14:00:49.604619 2340 log.go:172] (0xc000116790) (0xc0005fc000) Create stream\nI0330 14:00:49.604641 2340 log.go:172] (0xc000116790) (0xc0005fc000) Stream added, broadcasting: 5\nI0330 14:00:49.606113 2340 log.go:172] (0xc000116790) Reply frame received for 5\nI0330 14:00:49.669263 2340 log.go:172] (0xc000116790) Data frame received for 5\nI0330 14:00:49.669323 2340 log.go:172] (0xc0005fc000) (5) Data frame handling\nI0330 14:00:49.669365 2340 log.go:172] (0xc0005fc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:00:49.695440 2340 log.go:172] (0xc000116790) Data frame received for 5\nI0330 14:00:49.695473 2340 log.go:172] (0xc0005fc000) (5) Data frame handling\nI0330 14:00:49.695495 2340 log.go:172] (0xc000116790) Data frame received for 3\nI0330 
14:00:49.695503 2340 log.go:172] (0xc0007288c0) (3) Data frame handling\nI0330 14:00:49.695512 2340 log.go:172] (0xc0007288c0) (3) Data frame sent\nI0330 14:00:49.695519 2340 log.go:172] (0xc000116790) Data frame received for 3\nI0330 14:00:49.695524 2340 log.go:172] (0xc0007288c0) (3) Data frame handling\nI0330 14:00:49.697375 2340 log.go:172] (0xc000116790) Data frame received for 1\nI0330 14:00:49.697397 2340 log.go:172] (0xc000728820) (1) Data frame handling\nI0330 14:00:49.697419 2340 log.go:172] (0xc000728820) (1) Data frame sent\nI0330 14:00:49.697452 2340 log.go:172] (0xc000116790) (0xc000728820) Stream removed, broadcasting: 1\nI0330 14:00:49.697636 2340 log.go:172] (0xc000116790) Go away received\nI0330 14:00:49.697832 2340 log.go:172] (0xc000116790) (0xc000728820) Stream removed, broadcasting: 1\nI0330 14:00:49.697857 2340 log.go:172] (0xc000116790) (0xc0007288c0) Stream removed, broadcasting: 3\nI0330 14:00:49.697869 2340 log.go:172] (0xc000116790) (0xc0005fc000) Stream removed, broadcasting: 5\n" Mar 30 14:00:49.701: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:00:49.701: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:00:49.701: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:00:49.760: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 30 14:00:59.769: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:00:59.769: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:00:59.769: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:00:59.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999519s Mar 30 14:01:00.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993299048s Mar 30 14:01:01.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987193353s Mar 30 14:01:02.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981277045s Mar 30 14:01:03.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975796892s Mar 30 14:01:04.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970687068s Mar 30 14:01:05.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965990885s Mar 30 14:01:06.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96119691s Mar 30 14:01:07.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955502289s Mar 30 14:01:08.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.267265ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2169 Mar 30 14:01:09.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:01:10.100: INFO: stderr: "I0330 14:01:09.992320 2362 log.go:172] (0xc000a7c630) (0xc0004caa00) Create stream\nI0330 14:01:09.992384 2362 log.go:172] (0xc000a7c630) (0xc0004caa00) Stream added, broadcasting: 1\nI0330 14:01:09.995550 2362 log.go:172] (0xc000a7c630) Reply frame received for 1\nI0330 14:01:09.995601 2362 log.go:172] (0xc000a7c630) (0xc0004ca140) Create stream\nI0330 14:01:09.995639 2362 log.go:172] (0xc000a7c630) (0xc0004ca140) Stream 
added, broadcasting: 3\nI0330 14:01:09.996710 2362 log.go:172] (0xc000a7c630) Reply frame received for 3\nI0330 14:01:09.996746 2362 log.go:172] (0xc000a7c630) (0xc0004ca1e0) Create stream\nI0330 14:01:09.996756 2362 log.go:172] (0xc000a7c630) (0xc0004ca1e0) Stream added, broadcasting: 5\nI0330 14:01:09.998003 2362 log.go:172] (0xc000a7c630) Reply frame received for 5\nI0330 14:01:10.093577 2362 log.go:172] (0xc000a7c630) Data frame received for 3\nI0330 14:01:10.093617 2362 log.go:172] (0xc0004ca140) (3) Data frame handling\nI0330 14:01:10.093635 2362 log.go:172] (0xc0004ca140) (3) Data frame sent\nI0330 14:01:10.093654 2362 log.go:172] (0xc000a7c630) Data frame received for 3\nI0330 14:01:10.093671 2362 log.go:172] (0xc0004ca140) (3) Data frame handling\nI0330 14:01:10.093695 2362 log.go:172] (0xc000a7c630) Data frame received for 5\nI0330 14:01:10.093707 2362 log.go:172] (0xc0004ca1e0) (5) Data frame handling\nI0330 14:01:10.093718 2362 log.go:172] (0xc0004ca1e0) (5) Data frame sent\nI0330 14:01:10.093733 2362 log.go:172] (0xc000a7c630) Data frame received for 5\nI0330 14:01:10.093749 2362 log.go:172] (0xc0004ca1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 14:01:10.095139 2362 log.go:172] (0xc000a7c630) Data frame received for 1\nI0330 14:01:10.095169 2362 log.go:172] (0xc0004caa00) (1) Data frame handling\nI0330 14:01:10.095194 2362 log.go:172] (0xc0004caa00) (1) Data frame sent\nI0330 14:01:10.095245 2362 log.go:172] (0xc000a7c630) (0xc0004caa00) Stream removed, broadcasting: 1\nI0330 14:01:10.095462 2362 log.go:172] (0xc000a7c630) Go away received\nI0330 14:01:10.095784 2362 log.go:172] (0xc000a7c630) (0xc0004caa00) Stream removed, broadcasting: 1\nI0330 14:01:10.095813 2362 log.go:172] (0xc000a7c630) (0xc0004ca140) Stream removed, broadcasting: 3\nI0330 14:01:10.095833 2362 log.go:172] (0xc000a7c630) (0xc0004ca1e0) Stream removed, broadcasting: 5\n" Mar 30 14:01:10.100: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:01:10.100: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:01:10.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:01:10.310: INFO: stderr: "I0330 14:01:10.232530 2382 log.go:172] (0xc000646370) (0xc000322820) Create stream\nI0330 14:01:10.232594 2382 log.go:172] (0xc000646370) (0xc000322820) Stream added, broadcasting: 1\nI0330 14:01:10.235409 2382 log.go:172] (0xc000646370) Reply frame received for 1\nI0330 14:01:10.235449 2382 log.go:172] (0xc000646370) (0xc0003228c0) Create stream\nI0330 14:01:10.235459 2382 log.go:172] (0xc000646370) (0xc0003228c0) Stream added, broadcasting: 3\nI0330 14:01:10.236596 2382 log.go:172] (0xc000646370) Reply frame received for 3\nI0330 14:01:10.236637 2382 log.go:172] (0xc000646370) (0xc000768000) Create stream\nI0330 14:01:10.236652 2382 log.go:172] (0xc000646370) (0xc000768000) Stream added, broadcasting: 5\nI0330 14:01:10.237842 2382 log.go:172] (0xc000646370) Reply frame received for 5\nI0330 14:01:10.304004 2382 log.go:172] (0xc000646370) Data frame received for 5\nI0330 14:01:10.304051 2382 log.go:172] (0xc000768000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 14:01:10.304087 2382 log.go:172] (0xc000646370) Data frame received for 3\nI0330 14:01:10.304117 2382 log.go:172] (0xc0003228c0) 
(3) Data frame handling\nI0330 14:01:10.304129 2382 log.go:172] (0xc0003228c0) (3) Data frame sent\nI0330 14:01:10.304134 2382 log.go:172] (0xc000646370) Data frame received for 3\nI0330 14:01:10.304138 2382 log.go:172] (0xc0003228c0) (3) Data frame handling\nI0330 14:01:10.304183 2382 log.go:172] (0xc000768000) (5) Data frame sent\nI0330 14:01:10.304218 2382 log.go:172] (0xc000646370) Data frame received for 5\nI0330 14:01:10.304234 2382 log.go:172] (0xc000768000) (5) Data frame handling\nI0330 14:01:10.305928 2382 log.go:172] (0xc000646370) Data frame received for 1\nI0330 14:01:10.305950 2382 log.go:172] (0xc000322820) (1) Data frame handling\nI0330 14:01:10.305963 2382 log.go:172] (0xc000322820) (1) Data frame sent\nI0330 14:01:10.305977 2382 log.go:172] (0xc000646370) (0xc000322820) Stream removed, broadcasting: 1\nI0330 14:01:10.306139 2382 log.go:172] (0xc000646370) Go away received\nI0330 14:01:10.306450 2382 log.go:172] (0xc000646370) (0xc000322820) Stream removed, broadcasting: 1\nI0330 14:01:10.306474 2382 log.go:172] (0xc000646370) (0xc0003228c0) Stream removed, broadcasting: 3\nI0330 14:01:10.306490 2382 log.go:172] (0xc000646370) (0xc000768000) Stream removed, broadcasting: 5\n" Mar 30 14:01:10.310: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:01:10.310: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:01:10.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2169 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:01:10.533: INFO: stderr: "I0330 14:01:10.446301 2403 log.go:172] (0xc000acc630) (0xc0006aca00) Create stream\nI0330 14:01:10.446364 2403 log.go:172] (0xc000acc630) (0xc0006aca00) Stream added, broadcasting: 1\nI0330 14:01:10.453294 2403 log.go:172] (0xc000acc630) Reply frame received for 1\nI0330 14:01:10.453355 2403 log.go:172] (0xc000acc630) (0xc0006ac140) Create stream\nI0330 14:01:10.453374 2403 log.go:172] (0xc000acc630) (0xc0006ac140) Stream added, broadcasting: 3\nI0330 14:01:10.454527 2403 log.go:172] (0xc000acc630) Reply frame received for 3\nI0330 14:01:10.454563 2403 log.go:172] (0xc000acc630) (0xc0002b6000) Create stream\nI0330 14:01:10.454573 2403 log.go:172] (0xc000acc630) (0xc0002b6000) Stream added, broadcasting: 5\nI0330 14:01:10.455507 2403 log.go:172] (0xc000acc630) Reply frame received for 5\nI0330 14:01:10.527852 2403 log.go:172] (0xc000acc630) Data frame received for 5\nI0330 14:01:10.527903 2403 log.go:172] (0xc0002b6000) (5) Data frame handling\nI0330 14:01:10.527929 2403 log.go:172] (0xc0002b6000) (5) Data frame sent\nI0330 14:01:10.527946 2403 log.go:172] (0xc000acc630) Data frame received for 5\nI0330 14:01:10.527960 2403 log.go:172] (0xc0002b6000) (5) Data frame handling\nI0330 14:01:10.527980 2403 log.go:172] (0xc000acc630) Data frame received for 3\nI0330 14:01:10.527995 2403 log.go:172] (0xc0006ac140) (3) Data frame handling\nI0330 14:01:10.528011 2403 log.go:172] (0xc0006ac140) (3) Data frame sent\nI0330 14:01:10.528028 2403 log.go:172] (0xc000acc630) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 14:01:10.528046 2403 log.go:172] (0xc0006ac140) (3) Data frame handling\nI0330 14:01:10.529686 2403 log.go:172] (0xc000acc630) Data frame received for 1\nI0330 14:01:10.529712 2403 log.go:172] (0xc0006aca00) (1) Data frame handling\nI0330 14:01:10.529720 2403 log.go:172] (0xc0006aca00) (1) 
Data frame sent\nI0330 14:01:10.529729 2403 log.go:172] (0xc000acc630) (0xc0006aca00) Stream removed, broadcasting: 1\nI0330 14:01:10.529794 2403 log.go:172] (0xc000acc630) Go away received\nI0330 14:01:10.530117 2403 log.go:172] (0xc000acc630) (0xc0006aca00) Stream removed, broadcasting: 1\nI0330 14:01:10.530131 2403 log.go:172] (0xc000acc630) (0xc0006ac140) Stream removed, broadcasting: 3\nI0330 14:01:10.530136 2403 log.go:172] (0xc000acc630) (0xc0002b6000) Stream removed, broadcasting: 5\n" Mar 30 14:01:10.533: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:01:10.533: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:01:10.533: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 30 14:01:30.551: INFO: Deleting all statefulset in ns statefulset-2169 Mar 30 14:01:30.555: INFO: Scaling statefulset ss to 0 Mar 30 14:01:30.563: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:01:30.565: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:01:30.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2169" for this suite. Mar 30 14:01:36.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:01:36.710: INFO: namespace statefulset-2169 deletion completed in 6.130824512s • [SLOW TEST:88.431 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:01:36.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:01:36.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2" in namespace 
"projected-3839" to be "success or failure" Mar 30 14:01:36.783: INFO: Pod "downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935621ms Mar 30 14:01:38.802: INFO: Pod "downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02200577s Mar 30 14:01:40.821: INFO: Pod "downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040339176s STEP: Saw pod success Mar 30 14:01:40.821: INFO: Pod "downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2" satisfied condition "success or failure" Mar 30 14:01:40.824: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2 container client-container: STEP: delete the pod Mar 30 14:01:40.862: INFO: Waiting for pod downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2 to disappear Mar 30 14:01:40.886: INFO: Pod downwardapi-volume-892cb8eb-ebe2-4938-8ad5-6949c12169d2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:01:40.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3839" for this suite. Mar 30 14:01:46.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:01:46.975: INFO: namespace projected-3839 deletion completed in 6.084777302s • [SLOW TEST:10.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:01:46.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 30 14:01:47.035: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:02:02.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2724" for this suite. 
Mar 30 14:02:08.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:02:08.326: INFO: namespace pods-2724 deletion completed in 6.103437263s • [SLOW TEST:21.351 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:02:08.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-52rf STEP: Creating a pod to test atomic-volume-subpath Mar 30 14:02:08.457: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-52rf" in namespace "subpath-4536" to be "success or failure" Mar 30 14:02:08.472: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.642911ms Mar 30 14:02:10.476: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018886101s Mar 30 14:02:12.480: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 4.022587265s Mar 30 14:02:14.484: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 6.026758754s Mar 30 14:02:16.488: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 8.031043969s Mar 30 14:02:18.492: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 10.035099447s Mar 30 14:02:20.497: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 12.039424348s Mar 30 14:02:22.501: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 14.043464307s Mar 30 14:02:24.505: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 16.048016153s Mar 30 14:02:26.509: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 18.051642406s Mar 30 14:02:28.513: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 20.056079054s Mar 30 14:02:30.517: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Running", Reason="", readiness=true. Elapsed: 22.059778932s Mar 30 14:02:32.520: INFO: Pod "pod-subpath-test-projected-52rf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.063228445s STEP: Saw pod success Mar 30 14:02:32.520: INFO: Pod "pod-subpath-test-projected-52rf" satisfied condition "success or failure" Mar 30 14:02:32.527: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-52rf container test-container-subpath-projected-52rf: STEP: delete the pod Mar 30 14:02:32.557: INFO: Waiting for pod pod-subpath-test-projected-52rf to disappear Mar 30 14:02:32.581: INFO: Pod pod-subpath-test-projected-52rf no longer exists STEP: Deleting pod pod-subpath-test-projected-52rf Mar 30 14:02:32.582: INFO: Deleting pod "pod-subpath-test-projected-52rf" in namespace "subpath-4536" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:02:32.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4536" for this suite. Mar 30 14:02:38.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:02:38.693: INFO: namespace subpath-4536 deletion completed in 6.10592446s • [SLOW TEST:30.367 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:02:38.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 30 14:02:38.743: INFO: Waiting up to 5m0s for pod "downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c" in namespace "downward-api-7415" to be "success or failure" Mar 30 14:02:38.754: INFO: Pod "downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.455029ms Mar 30 14:02:40.757: INFO: Pod "downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013617033s Mar 30 14:02:42.761: INFO: Pod "downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01770815s STEP: Saw pod success Mar 30 14:02:42.761: INFO: Pod "downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c" satisfied condition "success or failure" Mar 30 14:02:42.765: INFO: Trying to get logs from node iruya-worker pod downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c container dapi-container: STEP: delete the pod Mar 30 14:02:42.797: INFO: Waiting for pod downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c to disappear Mar 30 14:02:42.814: INFO: Pod downward-api-fd8784ce-4312-4036-8f54-4a7ba2009f3c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:02:42.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7415" for this suite. Mar 30 14:02:48.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:02:48.907: INFO: namespace downward-api-7415 deletion completed in 6.089772004s • [SLOW TEST:10.214 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:02:48.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:02:55.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5999" for this suite. Mar 30 14:03:01.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:03:01.322: INFO: namespace namespaces-5999 deletion completed in 6.082309406s STEP: Destroying namespace "nsdeletetest-8623" for this suite. Mar 30 14:03:01.324: INFO: Namespace nsdeletetest-8623 was already deleted STEP: Destroying namespace "nsdeletetest-1889" for this suite. 
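
The Namespaces spec above verifies that deleting a namespace also removes the services inside it. A quick sketch of the same behaviour from the CLI, with illustrative names:

kubectl create namespace ns-demo
kubectl create service clusterip svc-demo --tcp=80:80 --namespace=ns-demo
# Deleting the namespace cascades to every object it contains:
kubectl delete namespace ns-demo
# Once deletion finishes, the service is gone along with the namespace:
kubectl get services --namespace=ns-demo
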
Mar 30 14:03:07.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:03:07.423: INFO: namespace nsdeletetest-1889 deletion completed in 6.098906505s • [SLOW TEST:18.515 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:03:07.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-bbe09405-fcdd-40ed-9cb7-dbfd92cc9045 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:03:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4054" for this suite. Mar 30 14:03:13.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:03:13.570: INFO: namespace secrets-4054 deletion completed in 6.084190845s • [SLOW TEST:6.147 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:03:13.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 30 14:03:21.678: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:21.683: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:23.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:23.687: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:25.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:25.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:27.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:27.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:29.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:29.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:31.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:31.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:33.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:33.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:35.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:35.687: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:37.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:37.687: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:39.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:39.687: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:41.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:41.688: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 14:03:43.683: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 14:03:43.687: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:03:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4871" for this suite. 
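
The prestop spec deletes a pod and then checks that the hook command ran before the container was killed. A minimal pod carrying such a hook might look like the following sketch (pod name, image, and marker file are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo goodbye > /tmp/prestop-marker"]
EOF
# On deletion, the kubelet runs the preStop command before sending SIGTERM:
kubectl delete pod prestop-demo
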
Mar 30 14:04:05.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:04:05.794: INFO: namespace container-lifecycle-hook-4871 deletion completed in 22.096450303s • [SLOW TEST:52.224 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:04:05.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0330 14:04:36.411151 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 14:04:36.411: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:04:36.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4978" for this suite. 
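
The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan via the API and then confirms the ReplicaSet survives. The closest kubectl equivalent, with an illustrative Deployment name:

# kubectl v1.15 (the client used in this run) spells the option
# --cascade=false; newer releases spell it --cascade=orphan.
kubectl delete deployment demo-deploy --cascade=false
# The ReplicaSet (and its pods) remain, now without an owner:
kubectl get replicasets
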
Mar 30 14:04:42.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:04:42.514: INFO: namespace gc-4978 deletion completed in 6.100150799s • [SLOW TEST:36.718 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:04:42.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 30 14:04:46.652: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:04:46.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9804" for this suite. 
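
The spec above asserts that a pod which exits successfully reports an empty termination message under TerminationMessagePolicy FallbackToLogsOnError, since the fallback to container logs only happens when the container fails. A sketch of such a pod (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "exit 0"]   # succeed without writing any logs
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After termination the message stays empty because the container succeeded:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
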
Mar 30 14:04:52.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:04:52.763: INFO: namespace container-runtime-9804 deletion completed in 6.093795009s • [SLOW TEST:10.249 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:04:52.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:04:52.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37" in namespace "projected-9784" to be "success or failure" Mar 30 14:04:52.822: INFO: Pod "downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052656ms Mar 30 14:04:54.826: INFO: Pod "downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007417563s Mar 30 14:04:56.830: INFO: Pod "downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011882382s STEP: Saw pod success Mar 30 14:04:56.830: INFO: Pod "downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37" satisfied condition "success or failure" Mar 30 14:04:56.833: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37 container client-container: STEP: delete the pod Mar 30 14:04:56.855: INFO: Waiting for pod downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37 to disappear Mar 30 14:04:56.858: INFO: Pod downwardapi-volume-c38e6c11-650a-4c42-af73-cd5f83b73a37 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:04:56.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9784" for this suite. 
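
The projected downwardAPI spec mounts the container's own CPU request into a file and checks its contents. A self-contained sketch of the volume plugin under test, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: main
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downward-demo   # prints 250 (the request expressed in millicores)
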
Mar 30 14:05:02.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:05:02.958: INFO: namespace projected-9784 deletion completed in 6.097394289s • [SLOW TEST:10.195 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:05:02.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 30 14:05:11.079: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:11.086: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:13.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:13.090: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:15.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:15.091: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:17.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:17.090: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:19.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:19.091: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:21.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:21.091: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 14:05:23.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 14:05:23.091: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:05:23.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3875" for this suite. 
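
The poststart spec registers an httpGet hook and verifies it fired. A minimal pod using such a hook (names illustrative; by default the kubelet issues the GET against the pod's own IP):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /
          port: 80   # if the hook fails, the container is killed and restarted per its restart policy
EOF
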
Mar 30 14:05:45.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:05:45.190: INFO: namespace container-lifecycle-hook-3875 deletion completed in 22.094943908s • [SLOW TEST:42.232 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:05:45.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 30 14:05:45.249: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix825494989/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:05:45.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-38" for this suite. 
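
The proxy spec above starts kubectl proxy on a unix socket instead of a TCP port and fetches /api/ through it. Reproducing that by hand (socket path illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# Any HTTP client that can speak over a unix socket works, e.g. curl:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
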
Mar 30 14:05:51.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:05:51.440: INFO: namespace kubectl-38 deletion completed in 6.123395479s • [SLOW TEST:6.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:05:51.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 14:05:51.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4058' Mar 30 14:05:51.597: INFO: stderr: "" Mar 30 14:05:51.597: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 30 14:05:51.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4058' Mar 30 14:06:01.860: INFO: stderr: "" Mar 30 14:06:01.860: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:06:01.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4058" for this suite. 
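With --restart=Never and the run-pod/v1 generator, kubectl run submits a bare pod rather than a workload controller, which is exactly what the verification step looks for. The created object is roughly equivalent to this sketch (1.15-era k8s.io/api types; the run label mirrors kubectl's convention, but the details here are illustrative rather than lifted from kubectl's generator source):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runPodNever approximates what `kubectl run --restart=Never` creates:
// a standalone pod whose containers are never restarted by the kubelet.
func runPodNever() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-nginx-pod",
			Labels: map[string]string{"run": "e2e-test-nginx-pod"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // --restart=Never
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
}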
Mar 30 14:06:07.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:06:07.978: INFO: namespace kubectl-4058 deletion completed in 6.114436401s • [SLOW TEST:16.537 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:06:07.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 30 14:06:12.584: INFO: Successfully updated pod "pod-update-activedeadlineseconds-be2118b8-db38-4a08-9bf5-cee05159cb3f" Mar 30 14:06:12.584: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-be2118b8-db38-4a08-9bf5-cee05159cb3f" in namespace "pods-8633" to be "terminated due to deadline exceeded" Mar 30 14:06:12.587: INFO: Pod "pod-update-activedeadlineseconds-be2118b8-db38-4a08-9bf5-cee05159cb3f": Phase="Running", Reason="", readiness=true. Elapsed: 3.325719ms Mar 30 14:06:14.592: INFO: Pod "pod-update-activedeadlineseconds-be2118b8-db38-4a08-9bf5-cee05159cb3f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007739372s Mar 30 14:06:14.592: INFO: Pod "pod-update-activedeadlineseconds-be2118b8-db38-4a08-9bf5-cee05159cb3f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:06:14.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8633" for this suite. 
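activeDeadlineSeconds is one of the few pod-spec fields that can be changed on a live pod; once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, which is the Phase="Failed" transition visible above. A sketch of the update, assuming a 1.15-era client-go clientset (whose Update method takes the object directly, with no context argument):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// setActiveDeadline mutates a running pod's deadline. The field is
// *int64 (nil means no deadline); roughly `seconds` after pod start
// the kubelet terminates the pod with reason DeadlineExceeded.
func setActiveDeadline(cs kubernetes.Interface, pod *corev1.Pod, seconds int64) (*corev1.Pod, error) {
	pod.Spec.ActiveDeadlineSeconds = &seconds
	return cs.CoreV1().Pods(pod.Namespace).Update(pod)
}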
Mar 30 14:06:20.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:06:20.838: INFO: namespace pods-8633 deletion completed in 6.242362252s • [SLOW TEST:12.860 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:06:20.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 30 14:06:20.941: INFO: namespace kubectl-8895 Mar 30 14:06:20.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8895' Mar 30 14:06:21.249: INFO: stderr: "" Mar 30 14:06:21.249: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 30 14:06:22.270: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:06:22.270: INFO: Found 0 / 1 Mar 30 14:06:23.254: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:06:23.254: INFO: Found 0 / 1 Mar 30 14:06:24.254: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:06:24.254: INFO: Found 1 / 1 Mar 30 14:06:24.255: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 30 14:06:24.258: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:06:24.258: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 30 14:06:24.258: INFO: wait on redis-master startup in kubectl-8895 Mar 30 14:06:24.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-84bds redis-master --namespace=kubectl-8895' Mar 30 14:06:24.357: INFO: stderr: "" Mar 30 14:06:24.357: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Mar 14:06:23.623 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Mar 14:06:23.623 # Server started, Redis version 3.2.12\n1:M 30 Mar 14:06:23.623 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Mar 14:06:23.623 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 30 14:06:24.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8895' Mar 30 14:06:24.509: INFO: stderr: "" Mar 30 14:06:24.509: INFO: stdout: "service/rm2 exposed\n" Mar 30 14:06:24.519: INFO: Service rm2 in namespace kubectl-8895 found. STEP: exposing service Mar 30 14:06:26.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8895' Mar 30 14:06:26.689: INFO: stderr: "" Mar 30 14:06:26.689: INFO: stdout: "service/rm3 exposed\n" Mar 30 14:06:26.693: INFO: Service rm3 in namespace kubectl-8895 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:06:28.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8895" for this suite. Mar 30 14:06:50.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:06:50.841: INFO: namespace kubectl-8895 deletion completed in 22.137156183s • [SLOW TEST:30.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:06:50.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ae774aa3-036e-4400-9971-e2c9fb4110d7 STEP: Creating a pod to test consume configMaps Mar 30 14:06:50.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd" in namespace "projected-7973" to be "success or failure" Mar 30 14:06:50.934: INFO: Pod "pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.542924ms Mar 30 14:06:52.938: INFO: Pod "pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009615137s Mar 30 14:06:54.947: INFO: Pod "pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018824387s STEP: Saw pod success Mar 30 14:06:54.947: INFO: Pod "pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd" satisfied condition "success or failure" Mar 30 14:06:54.950: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd container projected-configmap-volume-test: STEP: delete the pod Mar 30 14:06:54.964: INFO: Waiting for pod pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd to disappear Mar 30 14:06:54.981: INFO: Pod pod-projected-configmaps-72c9988b-33d6-45d8-bad6-1b2482eb03fd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:06:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7973" for this suite. Mar 30 14:07:00.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:07:01.082: INFO: namespace projected-7973 deletion completed in 6.097383897s • [SLOW TEST:10.241 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:07:01.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 30 14:07:09.217: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:09.233: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:11.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:11.236: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:13.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:13.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:15.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:15.236: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:17.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:17.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:19.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:19.236: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:21.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:21.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:23.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:23.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:25.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:25.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:27.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:27.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:29.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:29.237: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:31.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:31.236: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 14:07:33.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 14:07:33.236: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:07:33.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4297" for this suite. 
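This is the same lifecycle mechanism as the HTTP case earlier, except the hook execs a command inside the container instead of issuing a GET. A sketch of the handler wiring (1.15-era types; the command is illustrative, the suite's hook contacts its handler pod instead):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// postStartExec returns a lifecycle block whose hook runs a command in
// the freshly started container; if the hook fails, the kubelet kills
// the container and it is restarted per the pod's restart policy.
func postStartExec() *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: []string{"sh", "-c", "echo poststart > /tmp/hook.log"},
			},
		},
	}
}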
Mar 30 14:07:55.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:07:55.359: INFO: namespace container-lifecycle-hook-4297 deletion completed in 22.119757528s • [SLOW TEST:54.277 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:07:55.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5c818c2d-78de-49e2-b289-bd9884bf2ce3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5c818c2d-78de-49e2-b289-bd9884bf2ce3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:09:21.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-590" for this suite. 
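Projected volumes deliver configMap data through the same atomic-writer machinery as plain configMap volumes, so an update to the configMap object eventually appears in the mounted file; the long wait above is the kubelet's periodic sync catching up. Volume sketch (1.15-era types; key and path names are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume mounts key "data-1" of the named configMap.
// The kubelet re-projects the file when the configMap changes, so a
// reader inside the pod eventually sees the updated contents.
func projectedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-1",
						}},
					},
				}},
			},
		},
	}
}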
Mar 30 14:09:43.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:09:44.044: INFO: namespace projected-590 deletion completed in 22.10360347s • [SLOW TEST:108.684 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:09:44.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0330 14:09:54.142314 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 14:09:54.142: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:09:54.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-766" for this suite. 
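"Not orphaning" means the RC is deleted with a propagation policy under which the garbage collector follows the pods' ownerReferences and removes them too, which is what the wait step verifies. A sketch of such a delete, assuming a 1.15-era client-go clientset (Delete takes a name plus options, no context):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCCascading removes a ReplicationController and lets the
// garbage collector delete its pods in the background; passing
// metav1.DeletePropagationOrphan instead would leave the pods running.
func deleteRCCascading(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}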
Mar 30 14:10:00.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:10:00.243: INFO: namespace gc-766 deletion completed in 6.097136411s • [SLOW TEST:16.199 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:10:00.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 30 14:10:04.823: INFO: Successfully updated pod "annotationupdatee5173127-ce74-4c25-97f8-e373732f5d1b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:10:06.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7580" for this suite. 
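Downward-API volume files backed by fieldRef are rewritten by the kubelet when the referenced metadata changes, so the annotation update above becomes visible inside the container without a restart. Volume sketch (1.15-era types; the volume name is illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// annotationsVolume exposes the pod's own annotations as a file at
// <mountPath>/annotations; the kubelet refreshes it after metadata
// updates, which is the change the test waits to observe.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "annotations",
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.annotations",
					},
				}},
			},
		},
	}
}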
Mar 30 14:10:28.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:10:28.926: INFO: namespace downward-api-7580 deletion completed in 22.083335972s • [SLOW TEST:28.682 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:10:28.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:10:29.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe" in namespace "projected-5312" to be "success or failure" Mar 30 14:10:29.008: INFO: Pod "downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.870914ms Mar 30 14:10:31.011: INFO: Pod "downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007754031s Mar 30 14:10:33.016: INFO: Pod "downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012095967s STEP: Saw pod success Mar 30 14:10:33.016: INFO: Pod "downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe" satisfied condition "success or failure" Mar 30 14:10:33.019: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe container client-container: STEP: delete the pod Mar 30 14:10:33.052: INFO: Waiting for pod downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe to disappear Mar 30 14:10:33.057: INFO: Pod downwardapi-volume-e0deb897-4893-454d-b544-a8e1c0e8c5fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:10:33.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5312" for this suite. 
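Container resource values are projected with resourceFieldRef rather than fieldRef. A sketch of the downward-API item behind this test (1.15-era types; the container name and divisor are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitFile projects the named container's CPU limit into a file;
// with a divisor of 1m the file holds the limit in millicores.
func cpuLimitFile(containerName string) corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: containerName,
			Resource:      "limits.cpu",
			Divisor:       resource.MustParse("1m"),
		},
	}
}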
Mar 30 14:10:39.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:10:39.164: INFO: namespace projected-5312 deletion completed in 6.102659238s • [SLOW TEST:10.237 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:10:39.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wffvf in namespace proxy-6681 I0330 14:10:39.232046 6 runners.go:180] Created replication controller with name: proxy-service-wffvf, namespace: proxy-6681, replica count: 1 I0330 14:10:40.282510 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 14:10:41.282711 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 14:10:42.282999 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 14:10:43.283235 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 14:10:44.283466 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 14:10:45.283697 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 14:10:46.283932 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 14:10:47.284194 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 14:10:48.284424 6 runners.go:180] proxy-service-wffvf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 14:10:48.288: INFO: setup took 9.091828708s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 30 14:10:48.308: INFO: (0) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 20.030514ms) Mar 30 14:10:48.309: INFO: (0) 
/api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 21.015183ms) Mar 30 14:10:48.309: INFO: (0) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 21.199689ms) Mar 30 14:10:48.310: INFO: (0) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 22.12465ms) Mar 30 14:10:48.310: INFO: (0) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 22.455577ms) Mar 30 14:10:48.310: INFO: (0) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 22.273226ms) Mar 30 14:10:48.311: INFO: (0) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 22.95864ms) Mar 30 14:10:48.311: INFO: (0) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 23.356763ms) Mar 30 14:10:48.311: INFO: (0) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 23.526313ms) Mar 30 14:10:48.311: INFO: (0) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 23.520251ms) Mar 30 14:10:48.311: INFO: (0) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 23.399745ms) Mar 30 14:10:48.318: INFO: (0) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 30.397964ms) Mar 30 14:10:48.319: INFO: (0) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 30.599378ms) Mar 30 14:10:48.319: INFO: (0) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 30.703372ms) Mar 30 14:10:48.319: INFO: (0) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 30.838124ms) Mar 30 14:10:48.319: INFO: (0) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 4.95525ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 5.083064ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... 
(200; 5.010558ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 5.322736ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 5.391764ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 5.385665ms) Mar 30 14:10:48.324: INFO: (1) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 5.322123ms) Mar 30 14:10:48.325: INFO: (1) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 5.70186ms) Mar 30 14:10:48.325: INFO: (1) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 6.133242ms) Mar 30 14:10:48.325: INFO: (1) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 6.298947ms) Mar 30 14:10:48.325: INFO: (1) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 6.341578ms) Mar 30 14:10:48.327: INFO: (1) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 7.499473ms) Mar 30 14:10:48.329: INFO: (2) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 2.603513ms) Mar 30 14:10:48.329: INFO: (2) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 3.8405ms) Mar 30 14:10:48.333: INFO: (2) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 6.689126ms) Mar 30 14:10:48.333: INFO: (2) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 6.79609ms) Mar 30 14:10:48.334: INFO: (2) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 7.037962ms) Mar 30 14:10:48.335: INFO: (2) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 7.799866ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 15.026158ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 15.035374ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 15.139381ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 15.087896ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 15.086393ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 15.133048ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 15.708229ms) Mar 30 14:10:48.342: INFO: (2) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 15.783108ms) Mar 30 14:10:48.343: INFO: (2) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 15.946527ms) Mar 30 14:10:48.345: INFO: (3) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 2.438604ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 2.986497ms) Mar 30 14:10:48.346: INFO: (3) 
/api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.320499ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.493469ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.517168ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.574435ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.496029ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... (200; 3.683071ms) Mar 30 14:10:48.346: INFO: (3) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.764349ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.363563ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 4.438748ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 4.413932ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.500082ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.4173ms) Mar 30 14:10:48.347: INFO: (3) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.420523ms) Mar 30 14:10:48.350: INFO: (4) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 2.280942ms) Mar 30 14:10:48.351: INFO: (4) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.722164ms) Mar 30 14:10:48.351: INFO: (4) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.736919ms) Mar 30 14:10:48.351: INFO: (4) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.988331ms) Mar 30 14:10:48.351: INFO: (4) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.135844ms) Mar 30 14:10:48.351: INFO: (4) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 4.273766ms) Mar 30 14:10:48.353: INFO: (4) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 5.348759ms) Mar 30 14:10:48.353: INFO: (4) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 5.33744ms) Mar 30 14:10:48.353: INFO: (4) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 5.40307ms) Mar 30 14:10:48.353: INFO: (4) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 5.426944ms) Mar 30 14:10:48.356: INFO: (5) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.371521ms) Mar 30 14:10:48.357: INFO: (5) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.560966ms) Mar 30 14:10:48.357: INFO: (5) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 3.906836ms) Mar 30 14:10:48.357: INFO: (5) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... 
(200; 3.922254ms) Mar 30 14:10:48.357: INFO: (5) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 4.333297ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.817896ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 4.811049ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 4.800539ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 4.834114ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 5.165692ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 5.159574ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 5.12091ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 5.186186ms) Mar 30 14:10:48.358: INFO: (5) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 5.354004ms) Mar 30 14:10:48.361: INFO: (6) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 2.942875ms) Mar 30 14:10:48.361: INFO: (6) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.035374ms) Mar 30 14:10:48.361: INFO: (6) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.077084ms) Mar 30 14:10:48.362: INFO: (6) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.202553ms) Mar 30 14:10:48.362: INFO: (6) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... (200; 3.225734ms) Mar 30 14:10:48.362: INFO: (6) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.334377ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.601396ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 4.752753ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.867599ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 4.81249ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.893592ms) Mar 30 14:10:48.363: INFO: (6) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.806151ms) Mar 30 14:10:48.368: INFO: (7) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 4.206516ms) Mar 30 14:10:48.368: INFO: (7) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... 
(200; 4.20842ms) Mar 30 14:10:48.368: INFO: (7) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 4.197811ms) Mar 30 14:10:48.368: INFO: (7) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 4.214165ms) Mar 30 14:10:48.368: INFO: (7) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... (200; 2.978999ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.271636ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.357314ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.319123ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.406002ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.363301ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.406915ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.449802ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.407092ms) Mar 30 14:10:48.372: INFO: (8) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test (200; 2.963405ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.287762ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.286486ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 3.296458ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.369622ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.342166ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.423172ms) Mar 30 14:10:48.377: INFO: (9) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... 
(200; 4.891531ms) Mar 30 14:10:48.379: INFO: (9) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 5.542405ms) Mar 30 14:10:48.381: INFO: (9) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 7.568939ms) Mar 30 14:10:48.381: INFO: (9) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 7.581011ms) Mar 30 14:10:48.381: INFO: (9) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 7.682348ms) Mar 30 14:10:48.381: INFO: (9) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 7.610088ms) Mar 30 14:10:48.384: INFO: (10) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 2.980922ms) Mar 30 14:10:48.385: INFO: (10) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.387273ms) Mar 30 14:10:48.385: INFO: (10) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.508263ms) Mar 30 14:10:48.385: INFO: (10) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 4.802238ms) Mar 30 14:10:48.387: INFO: (10) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 5.080374ms) Mar 30 14:10:48.387: INFO: (10) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 5.040734ms) Mar 30 14:10:48.387: INFO: (10) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 5.143149ms) Mar 30 14:10:48.387: INFO: (10) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 5.147099ms) Mar 30 14:10:48.388: INFO: (10) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 6.656401ms) Mar 30 14:10:48.388: INFO: (10) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 6.771224ms) Mar 30 14:10:48.389: INFO: (10) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 7.145245ms) Mar 30 14:10:48.389: INFO: (10) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 7.080362ms) Mar 30 14:10:48.389: INFO: (10) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 7.163306ms) Mar 30 14:10:48.389: INFO: (10) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 7.190386ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.847504ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.774095ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.770792ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.853325ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.904641ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... 
(200; 4.103187ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 4.161355ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 4.117939ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 4.629529ms) Mar 30 14:10:48.393: INFO: (11) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.749021ms) Mar 30 14:10:48.394: INFO: (11) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.776225ms) Mar 30 14:10:48.394: INFO: (11) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 5.006735ms) Mar 30 14:10:48.394: INFO: (11) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 5.02817ms) Mar 30 14:10:48.394: INFO: (11) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 5.024279ms) Mar 30 14:10:48.394: INFO: (11) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 5.308541ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.693447ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.716857ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.724279ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.815587ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.73197ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.810278ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 3.838172ms) Mar 30 14:10:48.398: INFO: (12) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test<... (200; 5.592738ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 5.717339ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 5.803641ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 5.770085ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 6.111325ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 6.207513ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 6.165555ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 6.133227ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... 
(200; 6.188395ms) Mar 30 14:10:48.406: INFO: (13) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 6.171401ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.285187ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.432655ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.472187ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 3.547604ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.518761ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.662268ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.619245ms) Mar 30 14:10:48.410: INFO: (14) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test (200; 4.705677ms) Mar 30 14:10:48.411: INFO: (14) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.724084ms) Mar 30 14:10:48.411: INFO: (14) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.7955ms) Mar 30 14:10:48.411: INFO: (14) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.85261ms) Mar 30 14:10:48.414: INFO: (15) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 2.234752ms) Mar 30 14:10:48.415: INFO: (15) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.002884ms) Mar 30 14:10:48.415: INFO: (15) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 3.630229ms) Mar 30 14:10:48.415: INFO: (15) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.945963ms) Mar 30 14:10:48.416: INFO: (15) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.853636ms) Mar 30 14:10:48.416: INFO: (15) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.915683ms) Mar 30 14:10:48.416: INFO: (15) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.908047ms) Mar 30 14:10:48.416: INFO: (15) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... 
(200; 4.145626ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 5.232704ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 5.212565ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 5.174061ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 5.211594ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 5.282397ms) Mar 30 14:10:48.417: INFO: (15) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 5.388026ms) Mar 30 14:10:48.420: INFO: (16) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... (200; 3.776666ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.72859ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.896665ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.834781ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.905461ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.323816ms) Mar 30 14:10:48.422: INFO: (16) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 4.456599ms) Mar 30 14:10:48.421: INFO: (16) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.369025ms) Mar 30 14:10:48.422: INFO: (16) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.399809ms) Mar 30 14:10:48.422: INFO: (16) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.711675ms) Mar 30 14:10:48.422: INFO: (16) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 4.760833ms) Mar 30 14:10:48.422: INFO: (16) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 5.113385ms) Mar 30 14:10:48.425: INFO: (17) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 2.807586ms) Mar 30 14:10:48.425: INFO: (17) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: ... (200; 4.303746ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... 
(200; 4.21789ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 4.464061ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 4.513173ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 4.628299ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 4.833334ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.879459ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.8567ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.833941ms) Mar 30 14:10:48.427: INFO: (17) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 4.966708ms) Mar 30 14:10:48.431: INFO: (18) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... (200; 3.681179ms) Mar 30 14:10:48.431: INFO: (18) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.672529ms) Mar 30 14:10:48.431: INFO: (18) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: test (200; 4.478872ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname2/proxy/: bar (200; 4.496296ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname2/proxy/: tls qux (200; 4.892748ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 4.86297ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname1/proxy/: foo (200; 4.994525ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 4.958495ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 4.991158ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/services/http:proxy-service-wffvf:portname2/proxy/: bar (200; 4.947673ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 4.922205ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 4.991867ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 4.994908ms) Mar 30 14:10:48.432: INFO: (18) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 5.020629ms) Mar 30 14:10:48.433: INFO: (18) /api/v1/namespaces/proxy-6681/services/proxy-service-wffvf:portname1/proxy/: foo (200; 5.17356ms) Mar 30 14:10:48.436: INFO: (19) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.096113ms) Mar 30 14:10:48.436: INFO: (19) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:1080/proxy/: ... 
(200; 3.10855ms) Mar 30 14:10:48.436: INFO: (19) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2/proxy/: test (200; 3.247272ms) Mar 30 14:10:48.436: INFO: (19) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:162/proxy/: bar (200; 3.524488ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/pods/http:proxy-service-wffvf-7lwh2:160/proxy/: foo (200; 3.824445ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:460/proxy/: tls baz (200; 3.801856ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:462/proxy/: tls qux (200; 3.882329ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/pods/proxy-service-wffvf-7lwh2:1080/proxy/: test<... (200; 3.852315ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/services/https:proxy-service-wffvf:tlsportname1/proxy/: tls baz (200; 3.930382ms) Mar 30 14:10:48.437: INFO: (19) /api/v1/namespaces/proxy-6681/pods/https:proxy-service-wffvf-7lwh2:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1fcf8c54-6ae7-4180-8eaf-5c4ea4b9df44 STEP: Creating a pod to test consume secrets Mar 30 14:11:08.429: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac" in namespace "projected-6596" to be "success or failure" Mar 30 14:11:08.433: INFO: Pod "pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184592ms Mar 30 14:11:10.451: INFO: Pod "pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021574191s Mar 30 14:11:12.456: INFO: Pod "pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026377536s STEP: Saw pod success Mar 30 14:11:12.456: INFO: Pod "pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac" satisfied condition "success or failure" Mar 30 14:11:12.459: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac container projected-secret-volume-test: STEP: delete the pod Mar 30 14:11:12.541: INFO: Waiting for pod pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac to disappear Mar 30 14:11:12.543: INFO: Pod pod-projected-secrets-54bb0b92-09d5-4e4f-9ac5-606f753e95ac no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:11:12.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6596" for this suite. 
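For reference, the projected-secret case above asserts that files projected from a secret source honor the volume-wide defaultMode. A minimal sketch of the same wiring, with all names and the 0400 mode chosen for illustration (the suite uses its own generated names and a mounttest image):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
        volumeMounts:
        - name: projected-vol
          mountPath: /etc/projected
      volumes:
      - name: projected-vol
        projected:
          defaultMode: 0400          # applied to every file the sources project
          sources:
          - secret:
              name: demo-secret
    EOF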
Mar 30 14:11:18.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:11:18.678: INFO: namespace projected-6596 deletion completed in 6.092411773s • [SLOW TEST:10.369 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:11:18.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 30 14:11:25.609: INFO: 0 pods remaining Mar 30 14:11:25.609: INFO: 0 pods has nil DeletionTimestamp Mar 30 14:11:25.609: INFO: Mar 30 14:11:26.029: INFO: 0 pods remaining Mar 30 14:11:26.029: INFO: 0 pods has nil DeletionTimestamp Mar 30 14:11:26.029: INFO: STEP: Gathering metrics W0330 14:11:27.117057 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 14:11:27.117: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:11:27.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4806" for this suite. 
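The garbage-collector case turns on the delete options: with propagationPolicy: Foreground, the replication controller is held (carrying a foregroundDeletion finalizer) until the garbage collector has removed all of its pods, which is what the test waits for. A sketch of issuing the same kind of delete by hand, resource names illustrative:

    kubectl proxy --port=8001 &
    curl -X DELETE \
      'http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc' \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
    # While pods are still terminating, the RC remains visible with a
    # deletionTimestamp and a metadata.finalizers entry of "foregroundDeletion".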
Mar 30 14:11:33.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:11:33.222: INFO: namespace gc-4806 deletion completed in 6.102330925s • [SLOW TEST:14.543 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:11:33.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 30 14:11:33.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9447' Mar 30 14:11:35.523: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 14:11:35.523: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 30 14:11:35.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9447' Mar 30 14:11:35.661: INFO: stderr: "" Mar 30 14:11:35.661: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:11:35.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9447" for this suite. 
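As with the deployment generator earlier in the run, kubectl warns that --generator=job/v1 is deprecated. A close non-generator equivalent of what the test runs (image and name taken from the log; note that kubectl create job sets restartPolicy: Never rather than the OnFailure the test asks for, so the match is approximate):

    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
    kubectl get job e2e-test-nginx-job          # the verification step
    kubectl delete job e2e-test-nginx-job       # the cleanup step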
Mar 30 14:11:41.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:11:41.750: INFO: namespace kubectl-9447 deletion completed in 6.085541068s • [SLOW TEST:8.527 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:11:41.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 30 14:11:46.510: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4904 pod-service-account-88c58721-3c1e-46e7-a3f0-3aac39a12ed8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 30 14:11:46.751: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4904 pod-service-account-88c58721-3c1e-46e7-a3f0-3aac39a12ed8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 30 14:11:46.960: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4904 pod-service-account-88c58721-3c1e-46e7-a3f0-3aac39a12ed8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:11:47.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4904" for this suite. 
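The three kubectl exec calls above read the standard files that automounted service-account credentials provide to every container; the same check works in any pod (pod and container names are placeholders):

    # token, ca.crt and namespace are the files the test asserts on
    kubectl exec <pod> -c <container> -- ls /var/run/secrets/kubernetes.io/serviceaccount
    kubectl exec <pod> -c <container> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token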
Mar 30 14:11:53.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:11:53.270: INFO: namespace svcaccounts-4904 deletion completed in 6.092174199s • [SLOW TEST:11.519 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:11:53.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:11:53.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9715" for this suite. 
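The QOS test only submits a pod and checks that the API server stamps status.qosClass. The class is derived from the resource spec: no requests or limits yields BestEffort, requests equal to limits on every container yields Guaranteed, anything else Burstable. A sketch that produces a Guaranteed pod, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests: {cpu: 100m, memory: 64Mi}
          limits:   {cpu: 100m, memory: 64Mi}   # equal to requests -> Guaranteed
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'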
Mar 30 14:12:15.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:12:15.548: INFO: namespace pods-9715 deletion completed in 22.167637352s • [SLOW TEST:22.277 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:12:15.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 30 14:12:15.649: INFO: Waiting up to 5m0s for pod "pod-a24dc59b-1181-478b-9a89-5370cc1cb398" in namespace "emptydir-659" to be "success or failure" Mar 30 14:12:15.681: INFO: Pod "pod-a24dc59b-1181-478b-9a89-5370cc1cb398": Phase="Pending", Reason="", readiness=false. Elapsed: 32.1171ms Mar 30 14:12:17.685: INFO: Pod "pod-a24dc59b-1181-478b-9a89-5370cc1cb398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035852495s Mar 30 14:12:19.709: INFO: Pod "pod-a24dc59b-1181-478b-9a89-5370cc1cb398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060280407s STEP: Saw pod success Mar 30 14:12:19.709: INFO: Pod "pod-a24dc59b-1181-478b-9a89-5370cc1cb398" satisfied condition "success or failure" Mar 30 14:12:19.713: INFO: Trying to get logs from node iruya-worker2 pod pod-a24dc59b-1181-478b-9a89-5370cc1cb398 container test-container: STEP: delete the pod Mar 30 14:12:19.732: INFO: Waiting for pod pod-a24dc59b-1181-478b-9a89-5370cc1cb398 to disappear Mar 30 14:12:19.736: INFO: Pod pod-a24dc59b-1181-478b-9a89-5370cc1cb398 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:12:19.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-659" for this suite. 
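This emptyDir variant runs as root, creates a file with mode 0777 on the node's default medium (disk rather than tmpfs), and verifies mode and content from inside the pod. A minimal sketch of the same idea, names and image illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}    # no medium set -> node default storage, not tmpfs
    EOF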
Mar 30 14:12:25.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:12:25.832: INFO: namespace emptydir-659 deletion completed in 6.092470917s • [SLOW TEST:10.282 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:12:25.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 30 14:12:25.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8999' Mar 30 14:12:26.143: INFO: stderr: "" Mar 30 14:12:26.143: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 30 14:12:27.148: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:27.148: INFO: Found 0 / 1 Mar 30 14:12:28.290: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:28.290: INFO: Found 0 / 1 Mar 30 14:12:29.148: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:29.148: INFO: Found 0 / 1 Mar 30 14:12:30.148: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:30.148: INFO: Found 1 / 1 Mar 30 14:12:30.148: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 30 14:12:30.152: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:30.152: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 30 14:12:30.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-c89sq --namespace=kubectl-8999 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 30 14:12:30.244: INFO: stderr: "" Mar 30 14:12:30.244: INFO: stdout: "pod/redis-master-c89sq patched\n" STEP: checking annotations Mar 30 14:12:30.268: INFO: Selector matched 1 pods for map[app:redis] Mar 30 14:12:30.268: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:12:30.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8999" for this suite. 
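The patch step from the log, with the quoting a shell needs around the JSON payload (the suite builds argv directly, so the log line shows it unquoted):

    kubectl patch pod redis-master-c89sq -n kubectl-8999 \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    # verify the annotation landed:
    kubectl get pod redis-master-c89sq -n kubectl-8999 -o jsonpath='{.metadata.annotations.x}'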
Mar 30 14:12:52.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:12:52.408: INFO: namespace kubectl-8999 deletion completed in 22.136724603s • [SLOW TEST:26.577 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:12:52.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d28034a5-8e01-4eb0-8575-b0ea3fd06356 STEP: Creating a pod to test consume secrets Mar 30 14:12:52.500: INFO: Waiting up to 5m0s for pod "pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b" in namespace "secrets-4139" to be "success or failure" Mar 30 14:12:52.503: INFO: Pod "pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.816687ms Mar 30 14:12:54.507: INFO: Pod "pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007477123s Mar 30 14:12:56.520: INFO: Pod "pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02008134s STEP: Saw pod success Mar 30 14:12:56.520: INFO: Pod "pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b" satisfied condition "success or failure" Mar 30 14:12:56.522: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b container secret-volume-test: STEP: delete the pod Mar 30 14:12:56.561: INFO: Waiting for pod pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b to disappear Mar 30 14:12:56.566: INFO: Pod pod-secrets-138e9df3-4e5b-48e2-b104-5c60ecb0a65b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:12:56.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4139" for this suite. 
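Same defaultMode assertion as the projected-secret case earlier, but through a plain secret volume, where the mode sits directly on the volume source. The relevant stanza, values illustrative, dropped into a pod spec like the earlier sketches:

    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        defaultMode: 0400    # mode for every file projected from the secret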
Mar 30 14:13:02.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:02.684: INFO: namespace secrets-4139 deletion completed in 6.114296376s • [SLOW TEST:10.275 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:02.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2b0c0ade-50bd-4b03-b10b-7edb1c604cdb STEP: Creating a pod to test consume secrets Mar 30 14:13:02.811: INFO: Waiting up to 5m0s for pod "pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80" in namespace "secrets-6020" to be "success or failure" Mar 30 14:13:02.821: INFO: Pod "pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80": Phase="Pending", Reason="", readiness=false. Elapsed: 9.979304ms Mar 30 14:13:04.826: INFO: Pod "pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01433358s Mar 30 14:13:06.830: INFO: Pod "pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018269381s STEP: Saw pod success Mar 30 14:13:06.830: INFO: Pod "pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80" satisfied condition "success or failure" Mar 30 14:13:06.832: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80 container secret-env-test: STEP: delete the pod Mar 30 14:13:06.854: INFO: Waiting for pod pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80 to disappear Mar 30 14:13:06.857: INFO: Pod pod-secrets-b8b30e62-8db5-4333-bf05-6bade1a05f80 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:06.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6020" for this suite. 
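Here the secret is consumed through the environment rather than a volume, via secretKeyRef. A minimal sketch, names illustrative:

    kubectl create secret generic env-secret --from-literal=key-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox
        command: ["sh", "-c", "echo $SECRET_VALUE"]
        env:
        - name: SECRET_VALUE
          valueFrom:
            secretKeyRef:
              name: env-secret
              key: key-1
    EOF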
Mar 30 14:13:12.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:12.953: INFO: namespace secrets-6020 deletion completed in 6.093614238s • [SLOW TEST:10.269 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:12.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-f3a07677-7c1e-43a6-b49e-0081881385df STEP: Creating a pod to test consume secrets Mar 30 14:13:13.039: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538" in namespace "projected-5862" to be "success or failure" Mar 30 14:13:13.043: INFO: Pod "pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164959ms Mar 30 14:13:15.047: INFO: Pod "pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008248261s Mar 30 14:13:17.052: INFO: Pod "pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013088614s STEP: Saw pod success Mar 30 14:13:17.052: INFO: Pod "pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538" satisfied condition "success or failure" Mar 30 14:13:17.055: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538 container projected-secret-volume-test: STEP: delete the pod Mar 30 14:13:17.071: INFO: Waiting for pod pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538 to disappear Mar 30 14:13:17.076: INFO: Pod pod-projected-secrets-0504c587-f010-4026-b9a7-6e2d426da538 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:17.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5862" for this suite. 
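The mappings-and-item-mode variant exercises per-item overrides inside a projected volume's sources list: items remap a key to a new file path, and a per-item mode takes precedence over defaultMode. The source stanza, values illustrative:

    - secret:
        name: demo-secret
        items:
        - key: data-1
          path: new-path-data-1   # file appears under this name, not the key
          mode: 0400              # overrides the volume's defaultMode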
Mar 30 14:13:23.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:23.235: INFO: namespace projected-5862 deletion completed in 6.156935003s • [SLOW TEST:10.281 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:23.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c2711190-7f4f-429c-b78b-bf43c0d9b986 STEP: Creating a pod to test consume configMaps Mar 30 14:13:23.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5" in namespace "configmap-766" to be "success or failure" Mar 30 14:13:23.304: INFO: Pod "pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153863ms Mar 30 14:13:25.308: INFO: Pod "pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01203567s Mar 30 14:13:27.312: INFO: Pod "pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016202364s STEP: Saw pod success Mar 30 14:13:27.312: INFO: Pod "pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5" satisfied condition "success or failure" Mar 30 14:13:27.314: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5 container configmap-volume-test: STEP: delete the pod Mar 30 14:13:27.348: INFO: Waiting for pod pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5 to disappear Mar 30 14:13:27.363: INFO: Pod pod-configmaps-6ed6d4b5-7a04-4d92-9f69-0cabf5d763b5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:27.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-766" for this suite. 
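The configMap volume test is the configMap counterpart of the secret-volume cases above. A minimal sketch, names illustrative:

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/config/data-1"]
        volumeMounts:
        - name: config-vol
          mountPath: /etc/config
      volumes:
      - name: config-vol
        configMap:
          name: demo-config
    EOF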
Mar 30 14:13:33.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:33.477: INFO: namespace configmap-766 deletion completed in 6.110013266s • [SLOW TEST:10.241 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:33.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-259a75da-bc81-4562-a4a8-96295d4c924e STEP: Creating a pod to test consume configMaps Mar 30 14:13:33.539: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e" in namespace "projected-6778" to be "success or failure" Mar 30 14:13:33.584: INFO: Pod "pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.417613ms Mar 30 14:13:35.621: INFO: Pod "pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081755948s Mar 30 14:13:37.631: INFO: Pod "pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091850467s STEP: Saw pod success Mar 30 14:13:37.631: INFO: Pod "pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e" satisfied condition "success or failure" Mar 30 14:13:37.634: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e container projected-configmap-volume-test: STEP: delete the pod Mar 30 14:13:37.680: INFO: Waiting for pod pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e to disappear Mar 30 14:13:37.690: INFO: Pod pod-projected-configmaps-020791d2-a76b-4645-bbc7-f59a91d8586e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:37.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6778" for this suite. 
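The projected-configMap case combines two mechanisms already sketched: a projected volume with defaultMode (as in the projected-secret example) whose source is a configMap instead of a secret. Only the source stanza changes, values illustrative:

    volumes:
    - name: projected-config-vol
      projected:
        defaultMode: 0400
        sources:
        - configMap:
            name: demo-config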
Mar 30 14:13:43.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:43.789: INFO: namespace projected-6778 deletion completed in 6.0938966s • [SLOW TEST:10.311 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:43.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:13:43.872: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 30 14:13:48.876: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 14:13:48.876: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 30 14:13:48.908: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8603,SelfLink:/apis/apps/v1/namespaces/deployment-8603/deployments/test-cleanup-deployment,UID:fc544c3d-6678-4b08-8de9-0cafeacfb0e1,ResourceVersion:2687431,Generation:1,CreationTimestamp:2020-03-30 14:13:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 30 14:13:48.925: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8603,SelfLink:/apis/apps/v1/namespaces/deployment-8603/replicasets/test-cleanup-deployment-55bbcbc84c,UID:6664c167-c98a-4c2d-ae5a-206377afd28c,ResourceVersion:2687433,Generation:1,CreationTimestamp:2020-03-30 14:13:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fc544c3d-6678-4b08-8de9-0cafeacfb0e1 0xc0028b6067 0xc0028b6068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 30 14:13:48.926: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 30 14:13:48.926: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8603,SelfLink:/apis/apps/v1/namespaces/deployment-8603/replicasets/test-cleanup-controller,UID:b95b32f6-8607-4807-9d9b-4835402bed7e,ResourceVersion:2687432,Generation:1,CreationTimestamp:2020-03-30 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fc544c3d-6678-4b08-8de9-0cafeacfb0e1 0xc002efbf97 0xc002efbf98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 30 14:13:48.992: INFO: Pod "test-cleanup-controller-cb7cm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-cb7cm,GenerateName:test-cleanup-controller-,Namespace:deployment-8603,SelfLink:/api/v1/namespaces/deployment-8603/pods/test-cleanup-controller-cb7cm,UID:0766cf92-8478-4d30-a300-b6d83d8ac718,ResourceVersion:2687426,Generation:0,CreationTimestamp:2020-03-30 14:13:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller b95b32f6-8607-4807-9d9b-4835402bed7e 0xc002870d77 0xc002870d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mpjq5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mpjq5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mpjq5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002870df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002870e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:13:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:13:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:13:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:13:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.142,StartTime:2020-03-30 14:13:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-30 14:13:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2a1043e915b7280a0164b2f79039653595f2db08625b4adc03e7bbed0eb7e725}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 30 14:13:48.993: INFO: Pod "test-cleanup-deployment-55bbcbc84c-5dkr8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-5dkr8,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8603,SelfLink:/api/v1/namespaces/deployment-8603/pods/test-cleanup-deployment-55bbcbc84c-5dkr8,UID:1bdd7f87-891b-4b0e-9cc4-afc4bc68d53a,ResourceVersion:2687437,Generation:0,CreationTimestamp:2020-03-30 14:13:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 6664c167-c98a-4c2d-ae5a-206377afd28c 0xc002870ef7 0xc002870ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mpjq5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mpjq5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mpjq5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002870f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002870f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:13:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:48.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8603" for this suite. 
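The behavior this deployment test checks is visible in the dumps above: the Deployment carries RevisionHistoryLimit:*0, so once test-cleanup-deployment-55bbcbc84c is up, the adopted test-cleanup-controller ReplicaSet is deleted instead of being retained for rollback. The field in manifest form, with the limit copied from the dump and everything else illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cleanup-demo
    spec:
      revisionHistoryLimit: 0   # keep no superseded ReplicaSets around
      replicas: 1
      selector:
        matchLabels: {app: cleanup-demo}
      template:
        metadata:
          labels: {app: cleanup-demo}
        spec:
          containers:
          - name: app
            image: nginx
    EOF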
Mar 30 14:13:55.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:13:55.101: INFO: namespace deployment-8603 deletion completed in 6.096055925s • [SLOW TEST:11.312 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:13:55.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 30 14:13:55.174: INFO: Waiting up to 5m0s for pod "downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4" in namespace "downward-api-9240" to be "success or failure" Mar 30 14:13:55.184: INFO: Pod "downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.998774ms Mar 30 14:13:57.188: INFO: Pod "downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013692158s Mar 30 14:13:59.193: INFO: Pod "downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018455993s STEP: Saw pod success Mar 30 14:13:59.193: INFO: Pod "downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4" satisfied condition "success or failure" Mar 30 14:13:59.196: INFO: Trying to get logs from node iruya-worker pod downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4 container dapi-container: STEP: delete the pod Mar 30 14:13:59.228: INFO: Waiting for pod downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4 to disappear Mar 30 14:13:59.238: INFO: Pod downward-api-2a61a411-e518-49c8-b23f-d927af4a39a4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:13:59.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9240" for this suite. 
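The point of this downward API case is the fallback: when a container declares no limits, resourceFieldRef resolves limits.cpu and limits.memory to the node's allocatable capacity instead. A sketch of the env wiring, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep _LIMIT"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu      # no limit declared -> node allocatable CPU
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory   # no limit declared -> node allocatable memory
    EOF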
Mar 30 14:14:05.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:14:05.394: INFO: namespace downward-api-9240 deletion completed in 6.152300001s • [SLOW TEST:10.292 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:14:05.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:14:05.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7" in namespace "downward-api-382" to be "success or failure" Mar 30 14:14:05.466: INFO: Pod "downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.96524ms Mar 30 14:14:07.471: INFO: Pod "downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008510142s Mar 30 14:14:09.475: INFO: Pod "downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012998802s STEP: Saw pod success Mar 30 14:14:09.475: INFO: Pod "downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7" satisfied condition "success or failure" Mar 30 14:14:09.478: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7 container client-container: STEP: delete the pod Mar 30 14:14:09.506: INFO: Waiting for pod downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7 to disappear Mar 30 14:14:09.517: INFO: Pod downwardapi-volume-956c1c1a-126d-4f5a-b5d3-ec9b460007b7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:14:09.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-382" for this suite. 
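[Editor's sketch] The podname test mounts a downwardAPI volume that projects metadata.name into a file. A minimal equivalent, with an illustrative pod name and busybox in place of the test image (client-container is the container name shown in the log above):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file contents equal the pod's own name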
Mar 30 14:14:15.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:14:15.628: INFO: namespace downward-api-382 deletion completed in 6.106964102s • [SLOW TEST:10.234 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:14:15.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:14:19.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8793" for this suite. Mar 30 14:14:57.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:14:57.826: INFO: namespace kubelet-test-8793 deletion completed in 38.117256668s • [SLOW TEST:42.198 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:14:57.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:14:57.884: INFO: Creating ReplicaSet my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9 Mar 30 14:14:57.903: INFO: Pod name my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9: Found 0 pods out of 1 Mar 30 14:15:02.908: INFO: Pod name my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9: Found 1 pods out of 1 Mar 
30 14:15:02.908: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9" is running Mar 30 14:15:02.911: INFO: Pod "my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9-hcsql" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 14:14:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 14:15:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 14:15:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 14:14:57 +0000 UTC Reason: Message:}]) Mar 30 14:15:02.911: INFO: Trying to dial the pod Mar 30 14:15:07.922: INFO: Controller my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9: Got expected result from replica 1 [my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9-hcsql]: "my-hostname-basic-69cfd2c9-846c-4294-bd76-9a51608d65b9-hcsql", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:15:07.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4956" for this suite. Mar 30 14:15:13.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:15:14.022: INFO: namespace replicaset-4956 deletion completed in 6.095746432s • [SLOW TEST:16.195 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:15:14.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4980 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4980 to expose endpoints map[] Mar 30 14:15:14.138: INFO: Get endpoints failed (13.656271ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 30 14:15:15.262: INFO: successfully validated that service multi-endpoint-test in namespace services-4980 exposes endpoints map[] (1.137754732s elapsed) STEP: Creating pod pod1 in namespace services-4980 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4980 to expose endpoints map[pod1:[100]] Mar 30 14:15:18.379: INFO: successfully validated that service 
multi-endpoint-test in namespace services-4980 exposes endpoints map[pod1:[100]] (3.110559827s elapsed) STEP: Creating pod pod2 in namespace services-4980 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4980 to expose endpoints map[pod1:[100] pod2:[101]] Mar 30 14:15:21.430: INFO: successfully validated that service multi-endpoint-test in namespace services-4980 exposes endpoints map[pod1:[100] pod2:[101]] (3.046812922s elapsed) STEP: Deleting pod pod1 in namespace services-4980 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4980 to expose endpoints map[pod2:[101]] Mar 30 14:15:22.509: INFO: successfully validated that service multi-endpoint-test in namespace services-4980 exposes endpoints map[pod2:[101]] (1.073485079s elapsed) STEP: Deleting pod pod2 in namespace services-4980 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4980 to expose endpoints map[] Mar 30 14:15:23.548: INFO: successfully validated that service multi-endpoint-test in namespace services-4980 exposes endpoints map[] (1.033555518s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:15:23.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4980" for this suite. Mar 30 14:15:45.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:15:45.746: INFO: namespace services-4980 deletion completed in 22.143894213s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.724 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:15:45.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 30 14:15:45.804: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 14:15:45.817: INFO: Waiting for terminating namespaces to be deleted... 
Mar 30 14:15:45.820: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 30 14:15:45.827: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.827: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 14:15:45.827: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.827: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 14:15:45.827: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 30 14:15:45.832: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.832: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 14:15:45.832: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.832: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 14:15:45.832: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.832: INFO: Container coredns ready: true, restart count 0 Mar 30 14:15:45.833: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 30 14:15:45.833: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16011a8f21407092], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:15:46.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7858" for this suite. 
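[Editor's sketch] The FailedScheduling event above is exactly what a pod whose nodeSelector matches no node produces. A minimal reproduction, reusing the restricted-pod name from the log with an invented selector key:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: nonexistent                 # illustrative label; no node carries it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1      # any image works; the pod never schedules

Because no node matches, the scheduler emits "0/3 nodes are available: 3 node(s) didn't match node selector." and the pod stays Pending indefinitely.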
Mar 30 14:15:52.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:15:52.954: INFO: namespace sched-pred-7858 deletion completed in 6.09493405s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.208 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:15:52.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:15:53.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c" in namespace "projected-3992" to be "success or failure" Mar 30 14:15:53.032: INFO: Pod "downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.285739ms Mar 30 14:15:55.037: INFO: Pod "downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023781587s Mar 30 14:15:57.042: INFO: Pod "downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028645981s STEP: Saw pod success Mar 30 14:15:57.042: INFO: Pod "downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c" satisfied condition "success or failure" Mar 30 14:15:57.045: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c container client-container: STEP: delete the pod Mar 30 14:15:57.079: INFO: Waiting for pod downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c to disappear Mar 30 14:15:57.101: INFO: Pod downwardapi-volume-def81a20-873f-47e7-8165-3f1f64ba072c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:15:57.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3992" for this suite. 
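[Editor's sketch] Inside a projected volume, a downwardAPI source using resourceFieldRef must name the container whose resources it reads. A minimal pod of the kind this memory-limit test builds, with illustrative names and limit (client-container is the container name from the log above):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi                 # illustrative limit; the mounted file reports it in bytes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container   # required in the volume form
              resource: limits.memory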
Mar 30 14:16:03.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:16:03.185: INFO: namespace projected-3992 deletion completed in 6.080490597s • [SLOW TEST:10.230 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:16:03.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 30 14:16:03.862: INFO: Pod name wrapped-volume-race-404440eb-73f5-4b81-b9cc-cd61416e6f1b: Found 0 pods out of 5 Mar 30 14:16:08.869: INFO: Pod name wrapped-volume-race-404440eb-73f5-4b81-b9cc-cd61416e6f1b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-404440eb-73f5-4b81-b9cc-cd61416e6f1b in namespace emptydir-wrapper-6940, will wait for the garbage collector to delete the pods Mar 30 14:16:22.958: INFO: Deleting ReplicationController wrapped-volume-race-404440eb-73f5-4b81-b9cc-cd61416e6f1b took: 7.017445ms Mar 30 14:16:23.259: INFO: Terminating ReplicationController wrapped-volume-race-404440eb-73f5-4b81-b9cc-cd61416e6f1b pods took: 300.275932ms STEP: Creating RC which spawns configmap-volume pods Mar 30 14:17:03.291: INFO: Pod name wrapped-volume-race-e5c0226c-0e7a-42b2-a498-a557e99c70e0: Found 0 pods out of 5 Mar 30 14:17:08.299: INFO: Pod name wrapped-volume-race-e5c0226c-0e7a-42b2-a498-a557e99c70e0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e5c0226c-0e7a-42b2-a498-a557e99c70e0 in namespace emptydir-wrapper-6940, will wait for the garbage collector to delete the pods Mar 30 14:17:22.374: INFO: Deleting ReplicationController wrapped-volume-race-e5c0226c-0e7a-42b2-a498-a557e99c70e0 took: 7.465082ms Mar 30 14:17:22.674: INFO: Terminating ReplicationController wrapped-volume-race-e5c0226c-0e7a-42b2-a498-a557e99c70e0 pods took: 300.255694ms STEP: Creating RC which spawns configmap-volume pods Mar 30 14:18:02.303: INFO: Pod name wrapped-volume-race-5efa83d4-f08c-4094-b860-b4dec6146c96: Found 0 pods out of 5 Mar 30 14:18:07.309: INFO: Pod name wrapped-volume-race-5efa83d4-f08c-4094-b860-b4dec6146c96: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5efa83d4-f08c-4094-b860-b4dec6146c96 in namespace emptydir-wrapper-6940, will wait for the garbage collector to delete the pods Mar 30 14:18:21.392: INFO: Deleting 
ReplicationController wrapped-volume-race-5efa83d4-f08c-4094-b860-b4dec6146c96 took: 7.225219ms Mar 30 14:18:21.693: INFO: Terminating ReplicationController wrapped-volume-race-5efa83d4-f08c-4094-b860-b4dec6146c96 pods took: 300.507098ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:19:03.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6940" for this suite. Mar 30 14:19:11.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:19:11.846: INFO: namespace emptydir-wrapper-6940 deletion completed in 8.084824056s • [SLOW TEST:188.661 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:19:11.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-k8rc STEP: Creating a pod to test atomic-volume-subpath Mar 30 14:19:11.957: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k8rc" in namespace "subpath-8058" to be "success or failure" Mar 30 14:19:11.966: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.583765ms Mar 30 14:19:13.970: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012910032s Mar 30 14:19:15.974: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 4.016955196s Mar 30 14:19:17.978: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 6.021152614s Mar 30 14:19:19.982: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 8.025535808s Mar 30 14:19:21.986: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 10.028912079s Mar 30 14:19:23.990: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 12.03333343s Mar 30 14:19:25.995: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 14.037596183s Mar 30 14:19:27.999: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.042283933s Mar 30 14:19:30.003: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 18.046560853s Mar 30 14:19:32.007: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 20.050316791s Mar 30 14:19:34.012: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Running", Reason="", readiness=true. Elapsed: 22.054905797s Mar 30 14:19:36.016: INFO: Pod "pod-subpath-test-configmap-k8rc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.059324666s STEP: Saw pod success Mar 30 14:19:36.016: INFO: Pod "pod-subpath-test-configmap-k8rc" satisfied condition "success or failure" Mar 30 14:19:36.020: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-k8rc container test-container-subpath-configmap-k8rc: STEP: delete the pod Mar 30 14:19:36.057: INFO: Waiting for pod pod-subpath-test-configmap-k8rc to disappear Mar 30 14:19:36.061: INFO: Pod pod-subpath-test-configmap-k8rc no longer exists STEP: Deleting pod pod-subpath-test-configmap-k8rc Mar 30 14:19:36.061: INFO: Deleting pod "pod-subpath-test-configmap-k8rc" in namespace "subpath-8058" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:19:36.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8058" for this suite. Mar 30 14:19:42.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:19:42.156: INFO: namespace subpath-8058 deletion completed in 6.089944719s • [SLOW TEST:30.310 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:19:42.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 30 14:19:42.200: INFO: Waiting up to 5m0s for pod "pod-521a6b0a-a616-49f1-84dc-7128746de2cf" in namespace "emptydir-3752" to be "success or failure" Mar 30 14:19:42.260: INFO: Pod "pod-521a6b0a-a616-49f1-84dc-7128746de2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 59.650273ms Mar 30 14:19:44.264: INFO: Pod "pod-521a6b0a-a616-49f1-84dc-7128746de2cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063369444s Mar 30 14:19:46.268: INFO: Pod "pod-521a6b0a-a616-49f1-84dc-7128746de2cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067489585s STEP: Saw pod success Mar 30 14:19:46.268: INFO: Pod "pod-521a6b0a-a616-49f1-84dc-7128746de2cf" satisfied condition "success or failure" Mar 30 14:19:46.271: INFO: Trying to get logs from node iruya-worker pod pod-521a6b0a-a616-49f1-84dc-7128746de2cf container test-container: STEP: delete the pod Mar 30 14:19:46.304: INFO: Waiting for pod pod-521a6b0a-a616-49f1-84dc-7128746de2cf to disappear Mar 30 14:19:46.318: INFO: Pod pod-521a6b0a-a616-49f1-84dc-7128746de2cf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:19:46.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3752" for this suite. Mar 30 14:19:52.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:19:52.415: INFO: namespace emptydir-3752 deletion completed in 6.093572247s • [SLOW TEST:10.258 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:19:52.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:19:52.473: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:19:53.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-752" for this suite. 
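[Editor's sketch] On the v1.15 cluster used in this run, custom resource definitions are registered through apiextensions.k8s.io/v1beta1. A minimal CRD of the sort this test creates and deletes, with an illustrative group and kind:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com  # must be <plural>.<group>
spec:
  group: stable.example.com          # illustrative group
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab

Creating it makes /apis/stable.example.com/v1/namespaces/*/crontabs servable; `kubectl delete crd crontabs.stable.example.com` removes the definition along with all its custom objects.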
Mar 30 14:19:59.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:19:59.674: INFO: namespace custom-resource-definition-752 deletion completed in 6.112356514s • [SLOW TEST:7.258 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:19:59.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 30 14:19:59.727: INFO: Waiting up to 5m0s for pod "pod-3b5a584f-babe-43e6-a120-b4352aa4ce88" in namespace "emptydir-4045" to be "success or failure" Mar 30 14:19:59.738: INFO: Pod "pod-3b5a584f-babe-43e6-a120-b4352aa4ce88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.767032ms Mar 30 14:20:01.742: INFO: Pod "pod-3b5a584f-babe-43e6-a120-b4352aa4ce88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014238772s Mar 30 14:20:03.745: INFO: Pod "pod-3b5a584f-babe-43e6-a120-b4352aa4ce88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017815102s STEP: Saw pod success Mar 30 14:20:03.745: INFO: Pod "pod-3b5a584f-babe-43e6-a120-b4352aa4ce88" satisfied condition "success or failure" Mar 30 14:20:03.748: INFO: Trying to get logs from node iruya-worker2 pod pod-3b5a584f-babe-43e6-a120-b4352aa4ce88 container test-container: STEP: delete the pod Mar 30 14:20:03.788: INFO: Waiting for pod pod-3b5a584f-babe-43e6-a120-b4352aa4ce88 to disappear Mar 30 14:20:03.816: INFO: Pod pod-3b5a584f-babe-43e6-a120-b4352aa4ce88 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:20:03.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4045" for this suite. 
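[Editor's sketch] The (non-root,0666,default) case exercises an emptyDir on the node's default medium, written by a non-root user with files created mode 0666. A rough equivalent with illustrative names and busybox in place of the conformance test image (test-container is the container name from the log above):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root; emptyDir dirs default to 0777, so the write still succeeds
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "umask 0 && touch /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk); medium: Memory gives the tmpfs variants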
Mar 30 14:20:09.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:20:09.919: INFO: namespace emptydir-4045 deletion completed in 6.099455737s • [SLOW TEST:10.245 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:20:09.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 30 14:20:14.519: INFO: Successfully updated pod "labelsupdate69e6650d-b07b-4709-a002-40d51ab0b06e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:20:16.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6134" for this suite. 
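[Editor's sketch] The labels-update test works because the kubelet refreshes downward API files when pod metadata changes. A minimal pod to poke at, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo            # illustrative name
  labels:
    key: value1
spec:
  containers:
  - name: client-container           # illustrative container name
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Running `kubectl label pod labelsupdate-demo key=value2 --overwrite` changes the mounted labels file shortly afterwards, which is the behavior the "Successfully updated pod" line above is asserting.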
Mar 30 14:20:38.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:20:38.666: INFO: namespace projected-6134 deletion completed in 22.128049069s • [SLOW TEST:28.746 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:20:38.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:20:38.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88" in namespace "projected-7382" to be "success or failure" Mar 30 14:20:38.752: INFO: Pod "downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88": Phase="Pending", Reason="", readiness=false. Elapsed: 11.095012ms Mar 30 14:20:40.756: INFO: Pod "downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015384784s Mar 30 14:20:42.760: INFO: Pod "downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01938057s STEP: Saw pod success Mar 30 14:20:42.760: INFO: Pod "downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88" satisfied condition "success or failure" Mar 30 14:20:42.763: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88 container client-container: STEP: delete the pod Mar 30 14:20:42.837: INFO: Waiting for pod downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88 to disappear Mar 30 14:20:42.847: INFO: Pod downwardapi-volume-c0c11e8e-e8f7-4107-a778-04b9659b8f88 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:20:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7382" for this suite. 
Mar 30 14:20:48.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:20:48.940: INFO: namespace projected-7382 deletion completed in 6.089896436s • [SLOW TEST:10.274 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:20:48.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:21:20.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6112" for this suite. 
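[Editor's sketch] The terminate-cmd-rpa/rpof/rpn containers above differ only in the pod's restartPolicy (Always, OnFailure, Never), which is what drives the expected RestartCount, Phase, Ready, and State assertions. The Never case, sketched minimally with a stand-in busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo               # illustrative name
spec:
  restartPolicy: Never               # the rpn case: no restarts after exit
  containers:
  - name: terminate-cmd-rpn
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]  # nonzero exit

With Never the pod ends in Phase=Failed with RestartCount 0; OnFailure restarts it on nonzero exit (RestartCount grows), and Always restarts it regardless of exit code.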
Mar 30 14:21:26.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:21:26.422: INFO: namespace container-runtime-6112 deletion completed in 6.100354904s • [SLOW TEST:37.481 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:21:26.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9672 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 14:21:26.466: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 14:21:50.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.174:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9672 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 14:21:50.618: INFO: >>> kubeConfig: /root/.kube/config I0330 14:21:50.658690 6 log.go:172] (0xc0023cc6e0) (0xc000e3a000) Create stream I0330 14:21:50.658720 6 log.go:172] (0xc0023cc6e0) (0xc000e3a000) Stream added, broadcasting: 1 I0330 14:21:50.661089 6 log.go:172] (0xc0023cc6e0) Reply frame received for 1 I0330 14:21:50.661231 6 log.go:172] (0xc0023cc6e0) (0xc0000fe140) Create stream I0330 14:21:50.661242 6 log.go:172] (0xc0023cc6e0) (0xc0000fe140) Stream added, broadcasting: 3 I0330 14:21:50.662253 6 log.go:172] (0xc0023cc6e0) Reply frame received for 3 I0330 14:21:50.662292 6 log.go:172] (0xc0023cc6e0) (0xc0014185a0) Create stream I0330 14:21:50.662305 6 log.go:172] (0xc0023cc6e0) (0xc0014185a0) Stream added, broadcasting: 5 I0330 14:21:50.663255 6 log.go:172] (0xc0023cc6e0) Reply frame received for 5 I0330 14:21:50.754568 6 log.go:172] (0xc0023cc6e0) Data frame received for 3 I0330 14:21:50.754607 6 log.go:172] (0xc0000fe140) (3) Data frame handling I0330 14:21:50.754624 6 log.go:172] (0xc0000fe140) (3) Data frame sent I0330 14:21:50.754634 6 log.go:172] (0xc0023cc6e0) Data frame received for 3 I0330 14:21:50.754646 6 log.go:172] (0xc0000fe140) (3) Data frame handling I0330 14:21:50.754886 6 log.go:172] (0xc0023cc6e0) Data frame received for 5 I0330 
14:21:50.754909 6 log.go:172] (0xc0014185a0) (5) Data frame handling I0330 14:21:50.756094 6 log.go:172] (0xc0023cc6e0) Data frame received for 1 I0330 14:21:50.756112 6 log.go:172] (0xc000e3a000) (1) Data frame handling I0330 14:21:50.756127 6 log.go:172] (0xc000e3a000) (1) Data frame sent I0330 14:21:50.756147 6 log.go:172] (0xc0023cc6e0) (0xc000e3a000) Stream removed, broadcasting: 1 I0330 14:21:50.756159 6 log.go:172] (0xc0023cc6e0) Go away received I0330 14:21:50.756283 6 log.go:172] (0xc0023cc6e0) (0xc000e3a000) Stream removed, broadcasting: 1 I0330 14:21:50.756337 6 log.go:172] (0xc0023cc6e0) (0xc0000fe140) Stream removed, broadcasting: 3 I0330 14:21:50.756353 6 log.go:172] (0xc0023cc6e0) (0xc0014185a0) Stream removed, broadcasting: 5 Mar 30 14:21:50.756: INFO: Found all expected endpoints: [netserver-0] Mar 30 14:21:50.759: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.168:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9672 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 14:21:50.759: INFO: >>> kubeConfig: /root/.kube/config I0330 14:21:50.788197 6 log.go:172] (0xc00178cc60) (0xc001418820) Create stream I0330 14:21:50.788220 6 log.go:172] (0xc00178cc60) (0xc001418820) Stream added, broadcasting: 1 I0330 14:21:50.789901 6 log.go:172] (0xc00178cc60) Reply frame received for 1 I0330 14:21:50.789951 6 log.go:172] (0xc00178cc60) (0xc0000feaa0) Create stream I0330 14:21:50.789970 6 log.go:172] (0xc00178cc60) (0xc0000feaa0) Stream added, broadcasting: 3 I0330 14:21:50.790785 6 log.go:172] (0xc00178cc60) Reply frame received for 3 I0330 14:21:50.790817 6 log.go:172] (0xc00178cc60) (0xc000e3a140) Create stream I0330 14:21:50.790828 6 log.go:172] (0xc00178cc60) (0xc000e3a140) Stream added, broadcasting: 5 I0330 14:21:50.791596 6 log.go:172] (0xc00178cc60) Reply frame received for 5 I0330 14:21:50.856801 6 log.go:172] (0xc00178cc60) Data frame received for 5 I0330 14:21:50.856855 6 log.go:172] (0xc000e3a140) (5) Data frame handling I0330 14:21:50.856891 6 log.go:172] (0xc00178cc60) Data frame received for 3 I0330 14:21:50.856910 6 log.go:172] (0xc0000feaa0) (3) Data frame handling I0330 14:21:50.856943 6 log.go:172] (0xc0000feaa0) (3) Data frame sent I0330 14:21:50.856978 6 log.go:172] (0xc00178cc60) Data frame received for 3 I0330 14:21:50.857005 6 log.go:172] (0xc0000feaa0) (3) Data frame handling I0330 14:21:50.858724 6 log.go:172] (0xc00178cc60) Data frame received for 1 I0330 14:21:50.858763 6 log.go:172] (0xc001418820) (1) Data frame handling I0330 14:21:50.858791 6 log.go:172] (0xc001418820) (1) Data frame sent I0330 14:21:50.858836 6 log.go:172] (0xc00178cc60) (0xc001418820) Stream removed, broadcasting: 1 I0330 14:21:50.858854 6 log.go:172] (0xc00178cc60) Go away received I0330 14:21:50.859025 6 log.go:172] (0xc00178cc60) (0xc001418820) Stream removed, broadcasting: 1 I0330 14:21:50.859062 6 log.go:172] (0xc00178cc60) (0xc0000feaa0) Stream removed, broadcasting: 3 I0330 14:21:50.859123 6 log.go:172] (0xc00178cc60) (0xc000e3a140) Stream removed, broadcasting: 5 Mar 30 14:21:50.859: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:21:50.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9672" for this suite. 
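[Editor's sketch] The ExecWithOptions entries above stream a curl through the hostexec container to each netserver pod's /hostName endpoint. Assuming plain kubectl semantics rather than the framework's exec API, an equivalent manual check against the first endpoint from this run would be:

kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod -n pod-network-test-9672 -c hostexec -- /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.174:8080/hostName"

The test passes once every expected backend (netserver-0 and netserver-1 here) answers with its own hostname.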
Mar 30 14:22:12.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:22:12.957: INFO: namespace pod-network-test-9672 deletion completed in 22.093844819s • [SLOW TEST:46.534 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:22:12.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:22:17.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-301" for this suite. 
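[Editor's sketch] hostAliases entries are rendered by the kubelet into the container's /etc/hosts, which is what this test asserts. A minimal pod with illustrative names and hostnames:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo             # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]

The pod's log then contains a "127.0.0.1 foo.local bar.local" entry appended by the kubelet.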
Mar 30 14:23:07.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:23:07.150: INFO: namespace kubelet-test-301 deletion completed in 50.107796728s • [SLOW TEST:54.193 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:23:07.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d3ec45f0-f3d6-41c4-8a0e-3010cbecd30b STEP: Creating a pod to test consume secrets Mar 30 14:23:07.296: INFO: Waiting up to 5m0s for pod "pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca" in namespace "secrets-8258" to be "success or failure" Mar 30 14:23:07.310: INFO: Pod "pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.228061ms Mar 30 14:23:09.314: INFO: Pod "pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017652379s Mar 30 14:23:11.318: INFO: Pod "pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022021169s STEP: Saw pod success Mar 30 14:23:11.318: INFO: Pod "pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca" satisfied condition "success or failure" Mar 30 14:23:11.321: INFO: Trying to get logs from node iruya-worker pod pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca container secret-volume-test: STEP: delete the pod Mar 30 14:23:11.343: INFO: Waiting for pod pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca to disappear Mar 30 14:23:11.358: INFO: Pod pod-secrets-e986b107-a4a8-48cd-9cd9-20beb47ce0ca no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:23:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8258" for this suite. Mar 30 14:23:17.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:23:17.447: INFO: namespace secrets-8258 deletion completed in 6.086006429s STEP: Destroying namespace "secret-namespace-3683" for this suite. 
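[Editor's sketch] Secret names are namespace-scoped, so the same-named secret created in the second namespace (secret-namespace-3683 above) has no effect on the mount. A minimal sketch with illustrative names and data (secret-volume-test is the container name from the log above):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # the same name can exist independently in another namespace
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test        # resolved only within the pod's own namespace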
Mar 30 14:23:23.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:23:23.541: INFO: namespace secret-namespace-3683 deletion completed in 6.093280253s • [SLOW TEST:16.390 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:23:23.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-64cd953e-cd5c-4bbf-a083-86c27faa5f8c STEP: Creating a pod to test consume configMaps Mar 30 14:23:23.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755" in namespace "configmap-5544" to be "success or failure" Mar 30 14:23:23.615: INFO: Pod "pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755": Phase="Pending", Reason="", readiness=false. Elapsed: 9.744943ms Mar 30 14:23:25.640: INFO: Pod "pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034589282s Mar 30 14:23:27.644: INFO: Pod "pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038845708s STEP: Saw pod success Mar 30 14:23:27.644: INFO: Pod "pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755" satisfied condition "success or failure" Mar 30 14:23:27.648: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755 container configmap-volume-test: STEP: delete the pod Mar 30 14:23:27.665: INFO: Waiting for pod pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755 to disappear Mar 30 14:23:27.688: INFO: Pod pod-configmaps-ba725c87-69db-4ac5-8de0-2fc3bcd18755 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:23:27.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5544" for this suite. 
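[Editor's sketch] "Mappings and Item mode set" means the configMap volume lists explicit items, remapping each key to a file path and giving it a per-file mode. A minimal sketch with illustrative names (configmap-volume-test is the container name from the log above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo               # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-1         # key remapped to a nested path
        mode: 0400                   # per-item file mode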
Mar 30 14:23:33.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:23:33.786: INFO: namespace configmap-5544 deletion completed in 6.093956527s • [SLOW TEST:10.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:23:33.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Mar 30 14:23:33.836: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Mar 30 14:23:33.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:36.494: INFO: stderr: "" Mar 30 14:23:36.494: INFO: stdout: "service/redis-slave created\n" Mar 30 14:23:36.494: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Mar 30 14:23:36.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:36.814: INFO: stderr: "" Mar 30 14:23:36.814: INFO: stdout: "service/redis-master created\n" Mar 30 14:23:36.814: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 30 14:23:36.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:37.106: INFO: stderr: "" Mar 30 14:23:37.106: INFO: stdout: "service/frontend created\n" Mar 30 14:23:37.106: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Mar 30 14:23:37.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:37.365: INFO: stderr: "" Mar 30 14:23:37.365: INFO: stdout: "deployment.apps/frontend created\n" Mar 30 14:23:37.365: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 30 14:23:37.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:37.667: INFO: stderr: "" Mar 30 14:23:37.667: INFO: stdout: "deployment.apps/redis-master created\n" Mar 30 14:23:37.667: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Mar 30 14:23:37.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1102' Mar 30 14:23:37.939: INFO: stderr: "" Mar 30 14:23:37.939: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Mar 30 14:23:37.939: INFO: Waiting for all frontend pods to be Running. Mar 30 14:23:47.990: INFO: Waiting for frontend to serve content. Mar 30 14:23:48.009: INFO: Trying to add a new entry to the guestbook. Mar 30 14:23:48.026: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 30 14:23:48.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.185: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.185: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 30 14:23:48.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.316: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 30 14:23:48.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.429: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.429: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 30 14:23:48.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.524: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.524: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 30 14:23:48.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.627: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.627: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 30 14:23:48.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1102' Mar 30 14:23:48.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 14:23:48.756: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:23:48.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1102" for this suite. 
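
The repeated warning above is what kubectl prints whenever --grace-period=0 --force is used. The test pipes each original manifest back through 'kubectl delete -f -'; deleting the same objects by name would look roughly like this (an equivalent form, not the test's literal invocation):

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete service redis-slave redis-master frontend --grace-period=0 --force --namespace=kubectl-1102
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment frontend redis-master redis-slave --grace-period=0 --force --namespace=kubectl-1102
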
Mar 30 14:24:34.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:24:34.954: INFO: namespace kubectl-1102 deletion completed in 46.153941572s • [SLOW TEST:61.168 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:24:34.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-400ebd44-59b3-4756-bb1c-195fc1f8a64d STEP: Creating a pod to test consume configMaps Mar 30 14:24:35.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0" in namespace "configmap-5198" to be "success or failure" Mar 30 14:24:35.021: INFO: Pod "pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417552ms Mar 30 14:24:37.025: INFO: Pod "pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008583383s Mar 30 14:24:39.029: INFO: Pod "pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012609464s STEP: Saw pod success Mar 30 14:24:39.029: INFO: Pod "pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0" satisfied condition "success or failure" Mar 30 14:24:39.032: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0 container configmap-volume-test: STEP: delete the pod Mar 30 14:24:39.051: INFO: Waiting for pod pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0 to disappear Mar 30 14:24:39.055: INFO: Pod pod-configmaps-0a550ba7-e7dc-42b1-b8a4-cba082dfb7f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:24:39.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5198" for this suite. 
Mar 30 14:24:45.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:24:45.148: INFO: namespace configmap-5198 deletion completed in 6.088959685s • [SLOW TEST:10.193 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:24:45.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:24:45.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f" in namespace "downward-api-8916" to be "success or failure" Mar 30 14:24:45.246: INFO: Pod "downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.118868ms Mar 30 14:24:47.251: INFO: Pod "downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040721504s Mar 30 14:24:49.254: INFO: Pod "downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044288562s STEP: Saw pod success Mar 30 14:24:49.254: INFO: Pod "downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f" satisfied condition "success or failure" Mar 30 14:24:49.257: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f container client-container: STEP: delete the pod Mar 30 14:24:49.298: INFO: Waiting for pod downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f to disappear Mar 30 14:24:49.307: INFO: Pod downwardapi-volume-16ec9540-6054-467e-9d76-3c9a6e55bc3f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:24:49.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8916" for this suite. 
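
A sketch of the downward API volume this kind of test creates: a resourceFieldRef item exposes the container's own CPU limit as a file. All names and the limit value are illustrative, not read from the run above.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                        # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                   # report in millicores, e.g. "1000"
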
Mar 30 14:24:55.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:24:55.395: INFO: namespace downward-api-8916 deletion completed in 6.08581287s • [SLOW TEST:10.247 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:24:55.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f09b5089-a025-4c10-a817-62a44a9f05de STEP: Creating a pod to test consume secrets Mar 30 14:24:55.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6" in namespace "projected-6215" to be "success or failure" Mar 30 14:24:55.502: INFO: Pod "pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.682717ms Mar 30 14:24:57.506: INFO: Pod "pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023553981s Mar 30 14:24:59.514: INFO: Pod "pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031811825s STEP: Saw pod success Mar 30 14:24:59.514: INFO: Pod "pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6" satisfied condition "success or failure" Mar 30 14:24:59.517: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6 container projected-secret-volume-test: STEP: delete the pod Mar 30 14:24:59.598: INFO: Waiting for pod pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6 to disappear Mar 30 14:24:59.671: INFO: Pod pod-projected-secrets-582e152a-f0fc-4801-abca-a975d37808a6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:24:59.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6215" for this suite. 
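
The "non-root with defaultMode and fsGroup set" combination lives in two places: defaultMode on the projected volume, and runAsUser/fsGroup in the pod-level securityContext. A hedged sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
    fsGroup: 2000            # group ownership applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-test   # illustrative
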
Mar 30 14:25:05.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:25:05.796: INFO: namespace projected-6215 deletion completed in 6.120298361s • [SLOW TEST:10.400 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:25:05.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 in namespace container-probe-8246 Mar 30 14:25:09.916: INFO: Started pod liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 in namespace container-probe-8246 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 14:25:09.918: INFO: Initial restart count of pod liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is 0 Mar 30 14:25:28.004: INFO: Restart count of pod container-probe-8246/liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is now 1 (18.085804407s elapsed) Mar 30 14:25:50.050: INFO: Restart count of pod container-probe-8246/liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is now 2 (40.132066849s elapsed) Mar 30 14:26:08.091: INFO: Restart count of pod container-probe-8246/liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is now 3 (58.172549846s elapsed) Mar 30 14:26:28.133: INFO: Restart count of pod container-probe-8246/liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is now 4 (1m18.215248265s elapsed) Mar 30 14:27:30.267: INFO: Restart count of pod container-probe-8246/liveness-66fdc46b-8710-494b-b8e9-c1055b889eb6 is now 5 (2m20.348752988s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:27:30.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8246" for this suite. 
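
The monotonically increasing restart counts above come from a liveness probe that is guaranteed to start failing. A minimal sketch of such a pod; the probe timings and file path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  restartPolicy: Always      # each probe failure kills the container and bumps restartCount
  containers:
  - name: liveness
    image: busybox
    # create the probed file, then delete it so the probe begins to fail
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
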
Mar 30 14:27:36.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:27:36.394: INFO: namespace container-probe-8246 deletion completed in 6.093585664s • [SLOW TEST:150.598 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:27:36.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-69abf09b-53bc-404c-a465-53cf0e2aacea STEP: Creating a pod to test consume configMaps Mar 30 14:27:36.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f" in namespace "projected-6423" to be "success or failure" Mar 30 14:27:36.486: INFO: Pod "pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.615158ms Mar 30 14:27:38.490: INFO: Pod "pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012034467s Mar 30 14:27:40.494: INFO: Pod "pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016282341s STEP: Saw pod success Mar 30 14:27:40.494: INFO: Pod "pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f" satisfied condition "success or failure" Mar 30 14:27:40.498: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f container projected-configmap-volume-test: STEP: delete the pod Mar 30 14:27:40.530: INFO: Waiting for pod pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f to disappear Mar 30 14:27:40.542: INFO: Pod pod-projected-configmaps-093f7204-4672-4657-98ab-4a325c01e43f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:27:40.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6423" for this suite. 
Mar 30 14:27:46.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:27:46.653: INFO: namespace projected-6423 deletion completed in 6.087021315s • [SLOW TEST:10.259 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:27:46.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 30 14:27:46.730: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 14:27:46.738: INFO: Waiting for terminating namespaces to be deleted... Mar 30 14:27:46.741: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 30 14:27:46.746: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.746: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 14:27:46.746: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.746: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 14:27:46.746: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 30 14:27:46.754: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.754: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 14:27:46.754: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.754: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 14:27:46.754: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.754: INFO: Container coredns ready: true, restart count 0 Mar 30 14:27:46.754: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 30 14:27:46.754: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 30 14:27:46.806: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 30 14:27:46.806: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Mar 30 14:27:46.806: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 30 14:27:46.806: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 30 14:27:46.806: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 30 14:27:46.806: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95.16011b370287115a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1830/filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95.16011b3783ad753e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95.16011b37a20d4262], Reason = [Created], Message = [Created container filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95.16011b37af338236], Reason = [Started], Message = [Started container filler-pod-5c0563bd-6431-46be-8e0e-e7b1f1031b95] STEP: Considering event: Type = [Normal], Name = [filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705.16011b3701192137], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1830/filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705.16011b374bcbd4f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705.16011b3784eacbe2], Reason = [Created], Message = [Created container filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705] STEP: Considering event: Type = [Normal], Name = [filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705.16011b3797f523bf], Reason = [Started], Message = [Started container filler-pod-ef0861ed-9bc7-4d95-8616-14112824b705] STEP: Considering event: Type = [Warning], Name = [additional-pod.16011b37f2034bc3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:27:51.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1830" for this suite. 
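
The FailedScheduling event above is produced on purpose: after the filler pods consume most of the allocatable CPU, one more pod requests more CPU than any node has left. The real test computes the filler sizes from each node's allocatable capacity; a hand-written pod that would trip the same predicate looks roughly like this, with the request value an illustrative guess:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 600m          # anything above the remaining allocatable CPU on every schedulable node
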
Mar 30 14:27:57.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:27:58.079: INFO: namespace sched-pred-1830 deletion completed in 6.100714177s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.426 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:27:58.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 30 14:27:58.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8" in namespace "projected-1704" to be "success or failure" Mar 30 14:27:58.150: INFO: Pod "downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.790375ms Mar 30 14:28:00.154: INFO: Pod "downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00720237s Mar 30 14:28:02.158: INFO: Pod "downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011527594s STEP: Saw pod success Mar 30 14:28:02.158: INFO: Pod "downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8" satisfied condition "success or failure" Mar 30 14:28:02.161: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8 container client-container: STEP: delete the pod Mar 30 14:28:02.185: INFO: Waiting for pod downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8 to disappear Mar 30 14:28:02.198: INFO: Pod downwardapi-volume-bdcc5551-19aa-460d-b27c-1d60fde5c1a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:28:02.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1704" for this suite. 
Mar 30 14:28:08.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:28:08.292: INFO: namespace projected-1704 deletion completed in 6.089934464s • [SLOW TEST:10.212 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:28:08.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 30 14:28:08.372: INFO: Waiting up to 5m0s for pod "client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956" in namespace "containers-7174" to be "success or failure" Mar 30 14:28:08.376: INFO: Pod "client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956": Phase="Pending", Reason="", readiness=false. Elapsed: 3.829557ms Mar 30 14:28:10.379: INFO: Pod "client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007459341s Mar 30 14:28:12.384: INFO: Pod "client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011668829s STEP: Saw pod success Mar 30 14:28:12.384: INFO: Pod "client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956" satisfied condition "success or failure" Mar 30 14:28:12.387: INFO: Trying to get logs from node iruya-worker pod client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956 container test-container: STEP: delete the pod Mar 30 14:28:12.429: INFO: Waiting for pod client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956 to disappear Mar 30 14:28:12.447: INFO: Pod client-containers-11d0ffe9-8412-4eb5-af94-01a2704a6956 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:28:12.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7174" for this suite. 
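
"Image defaults" means the pod spec sets neither command nor args, so the kubelet runs the image's own ENTRYPOINT and CMD unchanged. A sketch, with image and names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox           # any image whose default entrypoint terminates cleanly
    # command: and args: deliberately omitted, so the image defaults apply
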
Mar 30 14:28:18.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:28:18.546: INFO: namespace containers-7174 deletion completed in 6.095245761s • [SLOW TEST:10.254 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:28:18.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0330 14:28:58.770852 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 14:28:58.770: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:28:58.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2652" for this suite. 
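
"Delete options say so" refers to deleting the ReplicationController with orphan propagation, so the garbage collector leaves its pods running, which is what the 30-second wait above verifies. With the kubectl of this era the equivalent would be roughly the following; the RC and namespace names are illustrative:

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc my-rc --cascade=false --namespace=gc-demo
# API-level equivalent: send DeleteOptions with propagationPolicy: Orphan
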
Mar 30 14:29:08.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:08.865: INFO: namespace gc-2652 deletion completed in 10.090329723s • [SLOW TEST:50.318 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:08.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:29:12.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3069" for this suite. 
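
A container whose command always fails ends up with a populated state.terminated, which is what this test asserts on. A sketch of such a pod plus a query for the reason; names are illustrative, and the reason is typically "Error" for a non-zero exit:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always exits non-zero

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
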
Mar 30 14:29:18.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:19.081: INFO: namespace kubelet-test-3069 deletion completed in 6.136996116s • [SLOW TEST:10.216 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:19.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-85d69b22-1c57-4290-9cd3-a4937b909d8c STEP: Creating a pod to test consume configMaps Mar 30 14:29:19.165: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978" in namespace "projected-3126" to be "success or failure" Mar 30 14:29:19.168: INFO: Pod "pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83261ms Mar 30 14:29:21.178: INFO: Pod "pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012535033s Mar 30 14:29:23.181: INFO: Pod "pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016125077s STEP: Saw pod success Mar 30 14:29:23.181: INFO: Pod "pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978" satisfied condition "success or failure" Mar 30 14:29:23.184: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978 container projected-configmap-volume-test: STEP: delete the pod Mar 30 14:29:23.199: INFO: Waiting for pod pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978 to disappear Mar 30 14:29:23.203: INFO: Pod pod-projected-configmaps-a2eb59ca-a576-453b-8ca8-e6eb2c1a7978 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:29:23.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3126" for this suite. 
Mar 30 14:29:29.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:29.331: INFO: namespace projected-3126 deletion completed in 6.125260967s • [SLOW TEST:10.250 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:29.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:29:29.406: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.940484ms) Mar 30 14:29:29.409: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.658708ms) Mar 30 14:29:29.413: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.143063ms) Mar 30 14:29:29.416: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.719283ms) Mar 30 14:29:29.420: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.314687ms) Mar 30 14:29:29.422: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.753895ms) Mar 30 14:29:29.425: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.361987ms) Mar 30 14:29:29.427: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.183321ms) Mar 30 14:29:29.429: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.455102ms) Mar 30 14:29:29.432: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.539473ms) Mar 30 14:29:29.435: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.532493ms) Mar 30 14:29:29.437: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.818436ms) Mar 30 14:29:29.440: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.388526ms) Mar 30 14:29:29.443: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.854635ms) Mar 30 14:29:29.446: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.719624ms) Mar 30 14:29:29.449: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.015193ms) Mar 30 14:29:29.452: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.26676ms) Mar 30 14:29:29.455: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.386453ms) Mar 30 14:29:29.459: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.349215ms) Mar 30 14:29:29.462: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.381632ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:29:29.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7912" for this suite. Mar 30 14:29:35.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:35.556: INFO: namespace proxy-7912 deletion completed in 6.090526328s • [SLOW TEST:6.224 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:35.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-2de9fa6c-de26-4a2d-949b-6b591f270049 STEP: Creating secret with name secret-projected-all-test-volume-65bdf955-8f09-4697-bc96-a311ec5a73ff STEP: Creating a pod to test Check all projections for projected volume plugin Mar 30 14:29:35.628: INFO: Waiting up to 5m0s for pod "projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54" in namespace "projected-688" to be "success or failure" Mar 30 14:29:35.632: INFO: Pod "projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1308ms Mar 30 14:29:37.636: INFO: Pod "projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007845221s Mar 30 14:29:39.639: INFO: Pod "projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01137479s STEP: Saw pod success Mar 30 14:29:39.639: INFO: Pod "projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54" satisfied condition "success or failure" Mar 30 14:29:39.642: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54 container projected-all-volume-test: STEP: delete the pod Mar 30 14:29:39.676: INFO: Waiting for pod projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54 to disappear Mar 30 14:29:39.679: INFO: Pod projected-volume-c781bb0f-da20-4ef4-8dff-2ae9cd158a54 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:29:39.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-688" for this suite. Mar 30 14:29:45.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:45.768: INFO: namespace projected-688 deletion completed in 6.085721989s • [SLOW TEST:10.211 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:45.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 30 14:29:45.906: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 30 14:29:50.911: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:29:51.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7078" for this suite. 
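
A pod is "released" the moment its labels stop matching the ReplicationController's selector: the RC drops ownership, leaves the pod running, and creates a replacement to restore its replica count. An equivalent manual trigger would be roughly the following, with the pod name and new label value illustrative:

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pod pod-release-abcde name=released --overwrite --namespace=replication-controller-7078
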
Mar 30 14:29:57.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:29:58.019: INFO: namespace replication-controller-7078 deletion completed in 6.086566993s • [SLOW TEST:12.251 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:29:58.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6583 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 14:29:58.097: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 14:30:22.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.188:8080/dial?request=hostName&protocol=http&host=10.244.2.187&port=8080&tries=1'] Namespace:pod-network-test-6583 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 14:30:22.231: INFO: >>> kubeConfig: /root/.kube/config I0330 14:30:22.256924 6 log.go:172] (0xc001c04b00) (0xc002e87900) Create stream I0330 14:30:22.256953 6 log.go:172] (0xc001c04b00) (0xc002e87900) Stream added, broadcasting: 1 I0330 14:30:22.258882 6 log.go:172] (0xc001c04b00) Reply frame received for 1 I0330 14:30:22.258921 6 log.go:172] (0xc001c04b00) (0xc000e3b5e0) Create stream I0330 14:30:22.258932 6 log.go:172] (0xc001c04b00) (0xc000e3b5e0) Stream added, broadcasting: 3 I0330 14:30:22.260093 6 log.go:172] (0xc001c04b00) Reply frame received for 3 I0330 14:30:22.260145 6 log.go:172] (0xc001c04b00) (0xc002e879a0) Create stream I0330 14:30:22.260157 6 log.go:172] (0xc001c04b00) (0xc002e879a0) Stream added, broadcasting: 5 I0330 14:30:22.261056 6 log.go:172] (0xc001c04b00) Reply frame received for 5 I0330 14:30:22.346532 6 log.go:172] (0xc001c04b00) Data frame received for 3 I0330 14:30:22.346561 6 log.go:172] (0xc000e3b5e0) (3) Data frame handling I0330 14:30:22.346579 6 log.go:172] (0xc000e3b5e0) (3) Data frame sent I0330 14:30:22.347986 6 log.go:172] (0xc001c04b00) Data frame received for 5 I0330 14:30:22.347998 6 log.go:172] (0xc002e879a0) (5) Data frame handling I0330 14:30:22.348011 6 log.go:172] (0xc001c04b00) Data frame received for 3 I0330 14:30:22.348038 6 log.go:172] (0xc000e3b5e0) (3) Data frame handling I0330 14:30:22.349721 6 log.go:172] (0xc001c04b00) Data frame received for 1 I0330 14:30:22.349739 6 log.go:172] (0xc002e87900) (1) Data frame handling I0330 14:30:22.349749 6 log.go:172] 
(0xc002e87900) (1) Data frame sent I0330 14:30:22.349762 6 log.go:172] (0xc001c04b00) (0xc002e87900) Stream removed, broadcasting: 1 I0330 14:30:22.349780 6 log.go:172] (0xc001c04b00) Go away received I0330 14:30:22.349918 6 log.go:172] (0xc001c04b00) (0xc002e87900) Stream removed, broadcasting: 1 I0330 14:30:22.349940 6 log.go:172] (0xc001c04b00) (0xc000e3b5e0) Stream removed, broadcasting: 3 I0330 14:30:22.349953 6 log.go:172] (0xc001c04b00) (0xc002e879a0) Stream removed, broadcasting: 5 Mar 30 14:30:22.350: INFO: Waiting for endpoints: map[] Mar 30 14:30:22.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.188:8080/dial?request=hostName&protocol=http&host=10.244.1.191&port=8080&tries=1'] Namespace:pod-network-test-6583 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 14:30:22.353: INFO: >>> kubeConfig: /root/.kube/config I0330 14:30:22.380280 6 log.go:172] (0xc002922b00) (0xc0020f0c80) Create stream I0330 14:30:22.380315 6 log.go:172] (0xc002922b00) (0xc0020f0c80) Stream added, broadcasting: 1 I0330 14:30:22.382086 6 log.go:172] (0xc002922b00) Reply frame received for 1 I0330 14:30:22.382201 6 log.go:172] (0xc002922b00) (0xc0020f0d20) Create stream I0330 14:30:22.382218 6 log.go:172] (0xc002922b00) (0xc0020f0d20) Stream added, broadcasting: 3 I0330 14:30:22.383485 6 log.go:172] (0xc002922b00) Reply frame received for 3 I0330 14:30:22.383530 6 log.go:172] (0xc002922b00) (0xc002e87a40) Create stream I0330 14:30:22.383544 6 log.go:172] (0xc002922b00) (0xc002e87a40) Stream added, broadcasting: 5 I0330 14:30:22.384599 6 log.go:172] (0xc002922b00) Reply frame received for 5 I0330 14:30:22.439648 6 log.go:172] (0xc002922b00) Data frame received for 3 I0330 14:30:22.439698 6 log.go:172] (0xc0020f0d20) (3) Data frame handling I0330 14:30:22.439734 6 log.go:172] (0xc0020f0d20) (3) Data frame sent I0330 14:30:22.440182 6 log.go:172] (0xc002922b00) Data frame received for 3 I0330 14:30:22.440222 6 log.go:172] (0xc0020f0d20) (3) Data frame handling I0330 14:30:22.440441 6 log.go:172] (0xc002922b00) Data frame received for 5 I0330 14:30:22.440453 6 log.go:172] (0xc002e87a40) (5) Data frame handling I0330 14:30:22.442328 6 log.go:172] (0xc002922b00) Data frame received for 1 I0330 14:30:22.442357 6 log.go:172] (0xc0020f0c80) (1) Data frame handling I0330 14:30:22.442375 6 log.go:172] (0xc0020f0c80) (1) Data frame sent I0330 14:30:22.442393 6 log.go:172] (0xc002922b00) (0xc0020f0c80) Stream removed, broadcasting: 1 I0330 14:30:22.442417 6 log.go:172] (0xc002922b00) Go away received I0330 14:30:22.442507 6 log.go:172] (0xc002922b00) (0xc0020f0c80) Stream removed, broadcasting: 1 I0330 14:30:22.442538 6 log.go:172] (0xc002922b00) (0xc0020f0d20) Stream removed, broadcasting: 3 I0330 14:30:22.442559 6 log.go:172] (0xc002922b00) (0xc002e87a40) Stream removed, broadcasting: 5 Mar 30 14:30:22.442: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:30:22.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6583" for this suite. 
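The streaming frames above are the framework exec'ing curl inside the host-network test pod against the webserver pod's /dial endpoint, which fans the request out to the target pod and reports what answered. A hedged Go equivalent of that probe (URL and IPs taken from the log; the {"responses": [...]} shape is what the netexec-style test image returns; this only works from a vantage point on the pod network, which is why the suite execs it in-cluster):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Same URL the spec above hits via `curl` from the host test pod.
        url := "http://10.244.2.188:8080/dial?request=hostName&protocol=http" +
            "&host=10.244.2.187&port=8080&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // The dial endpoint replies with {"responses": ["<hostName>", ...]};
        // an empty list means the target endpoint never answered.
        var out struct {
            Responses []string `json:"responses"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out.Responses)
    }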
Mar 30 14:30:44.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:30:44.568: INFO: namespace pod-network-test-6583 deletion completed in 22.107448021s • [SLOW TEST:46.548 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:30:44.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3193.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3193.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3193.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3193.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3193.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3193.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 14:30:50.695: INFO: DNS probes using dns-3193/dns-test-2e9b21dd-0ea7-4c33-9851-f880249820bd succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:30:50.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3193" for this suite. 
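The wheezy/jessie probe loops above assert, via getent, that kubelet wrote /etc/hosts entries for the pod's own hostname and FQDN. A simplified in-pod stand-in for that check, assuming Go 1.16+ (the real probes also exercise dig over UDP and TCP, which this sketch omits):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // For a pod with hostname+subdomain set, kubelet manages /etc/hosts
        // and adds a line like:
        // "10.244.x.y  dns-querier-1.dns-test-service.dns-3193.svc.cluster.local  dns-querier-1"
        hosts, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fqdn := "dns-querier-1.dns-test-service.dns-3193.svc.cluster.local"
        if strings.Contains(string(hosts), fqdn) {
            fmt.Println("OK") // what the probe writes to its /results file
        }
    }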
Mar 30 14:30:56.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:30:56.855: INFO: namespace dns-3193 deletion completed in 6.116600109s • [SLOW TEST:12.287 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:30:56.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:30:56.905: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 30 14:30:58.943: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:30:59.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4274" for this suite. 
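The spec above pins a namespace to two pods with a ResourceQuota, then asks an RC for more and reads the failure back off the RC's status. A sketch of both halves, assuming client-go v0.18+ signatures and that the RC "condition-test" already exists in the (illustrative) namespace:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns := "replication-controller-4274" // illustrative namespace

        // Quota that caps the namespace at two pods, as in the spec above.
        quota := &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
            },
        }
        if _, err := cs.CoreV1().ResourceQuotas(ns).Create(
            context.TODO(), quota, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // An RC asking for three replicas can only realize two; the quota
        // rejection surfaces as a ReplicaFailure condition on rc.Status.
        rc, err := cs.CoreV1().ReplicationControllers(ns).Get(
            context.TODO(), "condition-test", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range rc.Status.Conditions {
            if c.Type == corev1.ReplicationControllerReplicaFailure {
                fmt.Println(c.Reason, c.Message) // e.g. FailedCreate / exceeded quota
            }
        }
    }

Scaling the RC back down to the quota, as the spec does next, clears the condition.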
Mar 30 14:31:06.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:06.195: INFO: namespace replication-controller-4274 deletion completed in 6.224270503s • [SLOW TEST:9.339 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:06.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-da08dfe0-60fa-4bf4-9a5b-a773e66f6cbe STEP: Creating a pod to test consume secrets Mar 30 14:31:06.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d" in namespace "projected-6771" to be "success or failure" Mar 30 14:31:06.325: INFO: Pod "pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616136ms Mar 30 14:31:08.328: INFO: Pod "pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007078757s Mar 30 14:31:10.333: INFO: Pod "pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011606249s STEP: Saw pod success Mar 30 14:31:10.333: INFO: Pod "pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d" satisfied condition "success or failure" Mar 30 14:31:10.336: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d container secret-volume-test: STEP: delete the pod Mar 30 14:31:10.356: INFO: Waiting for pod pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d to disappear Mar 30 14:31:10.361: INFO: Pod pod-projected-secrets-754dcc3f-2721-4bd4-9052-5fb015fc350d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:31:10.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6771" for this suite. 
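The pod the spec above runs projects one secret into two separate volumes and reads it back from both mounts. A minimal constructor for that shape (busybox and the key path stand in for the suite's own test image and data; names are illustrative):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretPod mounts the same secret via two projected volumes.
    func projectedSecretPod(secretName string) *corev1.Pod {
        proj := corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                    },
                }},
            },
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "vol-1", VolumeSource: proj},
                    {Name: "vol-2", VolumeSource: proj},
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "vol-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                        {Name: "vol-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                    },
                }},
            },
        }
    }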
Mar 30 14:31:16.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:16.449: INFO: namespace projected-6771 deletion completed in 6.084501847s • [SLOW TEST:10.254 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:16.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 30 14:31:16.513: INFO: Waiting up to 5m0s for pod "downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987" in namespace "downward-api-3751" to be "success or failure" Mar 30 14:31:16.516: INFO: Pod "downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987": Phase="Pending", Reason="", readiness=false. Elapsed: 3.144182ms Mar 30 14:31:18.521: INFO: Pod "downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007346398s Mar 30 14:31:20.525: INFO: Pod "downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011713076s STEP: Saw pod success Mar 30 14:31:20.525: INFO: Pod "downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987" satisfied condition "success or failure" Mar 30 14:31:20.528: INFO: Trying to get logs from node iruya-worker pod downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987 container dapi-container: STEP: delete the pod Mar 30 14:31:20.548: INFO: Waiting for pod downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987 to disappear Mar 30 14:31:20.552: INFO: Pod downward-api-7e2f658c-7764-4493-86a6-6f9b9b741987 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:31:20.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3751" for this suite. 
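The downward API spec above injects the node's IP into the container environment via a fieldRef. The pod shape it exercises, sketched (busybox stands in for the suite's test image):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostIPPod fills HOST_IP from the pod's status via the downward API.
    func hostIPPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "printenv HOST_IP"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "status.hostIP",
                            },
                        },
                    }},
                }},
            },
        }
    }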
Mar 30 14:31:26.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:26.649: INFO: namespace downward-api-3751 deletion completed in 6.093579414s • [SLOW TEST:10.199 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:26.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 30 14:31:26.720: INFO: Waiting up to 5m0s for pod "var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3" in namespace "var-expansion-6250" to be "success or failure" Mar 30 14:31:26.724: INFO: Pod "var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.629548ms Mar 30 14:31:28.728: INFO: Pod "var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007400879s Mar 30 14:31:30.732: INFO: Pod "var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01131178s STEP: Saw pod success Mar 30 14:31:30.732: INFO: Pod "var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3" satisfied condition "success or failure" Mar 30 14:31:30.734: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3 container dapi-container: STEP: delete the pod Mar 30 14:31:30.750: INFO: Waiting for pod var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3 to disappear Mar 30 14:31:30.767: INFO: Pod var-expansion-f1a617db-0e78-4437-bd67-87befd9905b3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:31:30.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6250" for this suite. 
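Variable expansion, as tested above, means a later env var can reference an earlier one with $(NAME) and the kubelet substitutes the value before the container starts. A sketch of the relevant slice of the pod spec (variable names are illustrative):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // composedEnv shows $(FOO) being expanded inside BAR's value; FOO must
    // be defined earlier in the same list. References to undefined vars are
    // left verbatim rather than erroring.
    func composedEnv() []corev1.EnvVar {
        return []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "composed-$(FOO)"}, // becomes "composed-foo-value"
        }
    }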
Mar 30 14:31:36.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:36.864: INFO: namespace var-expansion-6250 deletion completed in 6.09300948s • [SLOW TEST:10.214 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:36.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-c58ff974-a45b-4f1c-a285-54210c89c1d4 STEP: Creating a pod to test consume secrets Mar 30 14:31:36.942: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e" in namespace "projected-6567" to be "success or failure" Mar 30 14:31:36.961: INFO: Pod "pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.145016ms Mar 30 14:31:38.965: INFO: Pod "pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023400352s Mar 30 14:31:40.970: INFO: Pod "pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028149852s STEP: Saw pod success Mar 30 14:31:40.970: INFO: Pod "pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e" satisfied condition "success or failure" Mar 30 14:31:40.973: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e container projected-secret-volume-test: STEP: delete the pod Mar 30 14:31:40.989: INFO: Waiting for pod pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e to disappear Mar 30 14:31:41.008: INFO: Pod pod-projected-secrets-6b1d1c0e-6479-4f58-8db8-d68c2fc86b6e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:31:41.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6567" for this suite. 
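"With mappings" in the spec above means the projection remaps a secret key to a chosen file path and mode instead of the defaults. A sketch of that volume source (the key, path, and mode are illustrative):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // mappedSecretVolume projects one key of a secret to a renamed file
    // with an explicit mode, rather than one file per key at default 0644.
    func mappedSecretVolume(secretName string) corev1.VolumeSource {
        mode := int32(0400)
        return corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",          // hypothetical key in the secret
                            Path: "new-path-data-1", // file name inside the mount
                            Mode: &mode,
                        }},
                    },
                }},
            },
        }
    }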
Mar 30 14:31:47.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:47.108: INFO: namespace projected-6567 deletion completed in 6.096831647s • [SLOW TEST:10.245 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:47.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 30 14:31:47.172: INFO: Waiting up to 5m0s for pod "pod-22f4c2e3-748a-4675-896a-79fc8c9cc748" in namespace "emptydir-5858" to be "success or failure" Mar 30 14:31:47.176: INFO: Pod "pod-22f4c2e3-748a-4675-896a-79fc8c9cc748": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676301ms Mar 30 14:31:49.288: INFO: Pod "pod-22f4c2e3-748a-4675-896a-79fc8c9cc748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115908336s Mar 30 14:31:51.292: INFO: Pod "pod-22f4c2e3-748a-4675-896a-79fc8c9cc748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119886814s STEP: Saw pod success Mar 30 14:31:51.292: INFO: Pod "pod-22f4c2e3-748a-4675-896a-79fc8c9cc748" satisfied condition "success or failure" Mar 30 14:31:51.295: INFO: Trying to get logs from node iruya-worker pod pod-22f4c2e3-748a-4675-896a-79fc8c9cc748 container test-container: STEP: delete the pod Mar 30 14:31:51.325: INFO: Waiting for pod pod-22f4c2e3-748a-4675-896a-79fc8c9cc748 to disappear Mar 30 14:31:51.331: INFO: Pod pod-22f4c2e3-748a-4675-896a-79fc8c9cc748 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:31:51.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5858" for this suite. 
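The emptyDir spec above asks for Medium=Memory, which backs the volume with tmpfs on the node, then asserts on the mount's type and mode bits. A sketch of the pod it runs (busybox and the shell check stand in for the suite's mounttest image and its flags):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsPod mounts a memory-backed emptyDir and prints what the spec
    // asserts on: the fstype (tmpfs) and the volume directory's mode.
    func tmpfsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }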
Mar 30 14:31:57.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:31:57.447: INFO: namespace emptydir-5858 deletion completed in 6.111643341s • [SLOW TEST:10.338 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:31:57.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0330 14:32:08.350828 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 14:32:08.350: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:32:08.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5763" for this suite. 
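The garbage-collector spec above gives half the pods a second owner, then deletes the first owner in foreground mode and checks the dual-owned pods survive. The deletion half, sketched with client-go v0.18+ signatures (namespace and RC names follow the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Foreground propagation: the owner is marked as waiting for its
        // dependents, and GC reaps dependents before the owner itself. A
        // dependent is only deleted once ALL of its owners are gone or
        // deleting, so pods that also reference simpletest-rc-to-stay
        // must remain, which is exactly what the spec verifies.
        policy := metav1.DeletePropagationForeground
        err := cs.CoreV1().ReplicationControllers("gc-5763").Delete(
            context.TODO(), "simpletest-rc-to-be-deleted",
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }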
Mar 30 14:32:16.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:32:16.447: INFO: namespace gc-5763 deletion completed in 8.092201815s • [SLOW TEST:18.999 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:32:16.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:32:21.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9221" for this suite. 
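The watch-ordering spec above opens many watches concurrently and requires them all to deliver the same resourceVersions in the same order. One such watcher, sketched (any two of these started from the same point must print identical sequences):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        cs := kubernetes.NewForConfigOrDie(cfg)

        w, err := cs.CoreV1().ConfigMaps("watch-9221").Watch(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each event carries the object's resourceVersion; the ordering of
        // these versions is what the spec compares across watchers.
        for ev := range w.ResultChan() {
            cm := ev.Object.(*corev1.ConfigMap)
            fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
        }
    }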
Mar 30 14:32:28.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:32:28.178: INFO: namespace watch-9221 deletion completed in 6.173519664s • [SLOW TEST:11.731 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:32:28.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 30 14:32:28.282: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1071,SelfLink:/api/v1/namespaces/watch-1071/configmaps/e2e-watch-test-resource-version,UID:e536ddf1-3bf8-48d0-a4a4-a3698df59bf9,ResourceVersion:2692395,Generation:0,CreationTimestamp:2020-03-30 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 14:32:28.282: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1071,SelfLink:/api/v1/namespaces/watch-1071/configmaps/e2e-watch-test-resource-version,UID:e536ddf1-3bf8-48d0-a4a4-a3698df59bf9,ResourceVersion:2692396,Generation:0,CreationTimestamp:2020-03-30 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:32:28.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1071" for this suite. 
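Starting a watch "from a specific resource version", as above, means passing the RV of a past write so the server replays only events strictly after it; that is why the log shows exactly the second MODIFIED (mutation: 2) and the DELETED, not the creation or first update. The call, sketched (the RV literal is illustrative):

    // Inside a program with cs set up as in the previous sketches:
    w, err := cs.CoreV1().ConfigMaps("watch-1071").Watch(context.TODO(), metav1.ListOptions{
        // RV returned by the first update; events at or before it are skipped.
        ResourceVersion: "2692394",
        FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()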
Mar 30 14:32:34.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:32:34.411: INFO: namespace watch-1071 deletion completed in 6.120417278s • [SLOW TEST:6.233 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:32:34.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 30 14:32:34.494: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692417,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 14:32:34.495: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692417,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 30 14:32:44.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692437,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 30 14:32:44.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692437,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 30 14:32:54.513: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692458,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 14:32:54.513: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692458,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 30 14:33:04.520: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692478,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 14:33:04.520: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-a,UID:988834e5-5377-4e02-a3fd-62ac8cf4e20b,ResourceVersion:2692478,Generation:0,CreationTimestamp:2020-03-30 14:32:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 30 14:33:14.527: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-b,UID:c1ee34db-2bee-4b25-95be-fed6921d2102,ResourceVersion:2692499,Generation:0,CreationTimestamp:2020-03-30 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 14:33:14.527: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-b,UID:c1ee34db-2bee-4b25-95be-fed6921d2102,ResourceVersion:2692499,Generation:0,CreationTimestamp:2020-03-30 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 30 14:33:24.549: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-b,UID:c1ee34db-2bee-4b25-95be-fed6921d2102,ResourceVersion:2692519,Generation:0,CreationTimestamp:2020-03-30 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 14:33:24.549: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4497,SelfLink:/api/v1/namespaces/watch-4497/configmaps/e2e-watch-test-configmap-b,UID:c1ee34db-2bee-4b25-95be-fed6921d2102,ResourceVersion:2692519,Generation:0,CreationTimestamp:2020-03-30 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:33:34.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4497" for this suite. Mar 30 14:33:40.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:33:40.660: INFO: namespace watch-4497 deletion completed in 6.106624984s • [SLOW TEST:66.248 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:33:40.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 30 14:33:40.752: INFO: Waiting up to 5m0s for pod "pod-f651078f-4f66-4403-b342-8eae9b5b3665" in namespace "emptydir-6801" to be "success or failure" Mar 30 14:33:40.758: INFO: Pod "pod-f651078f-4f66-4403-b342-8eae9b5b3665": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576722ms Mar 30 14:33:42.763: INFO: Pod "pod-f651078f-4f66-4403-b342-8eae9b5b3665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0109948s Mar 30 14:33:44.767: INFO: Pod "pod-f651078f-4f66-4403-b342-8eae9b5b3665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014920158s STEP: Saw pod success Mar 30 14:33:44.767: INFO: Pod "pod-f651078f-4f66-4403-b342-8eae9b5b3665" satisfied condition "success or failure" Mar 30 14:33:44.769: INFO: Trying to get logs from node iruya-worker pod pod-f651078f-4f66-4403-b342-8eae9b5b3665 container test-container: STEP: delete the pod Mar 30 14:33:44.800: INFO: Waiting for pod pod-f651078f-4f66-4403-b342-8eae9b5b3665 to disappear Mar 30 14:33:44.807: INFO: Pod pod-f651078f-4f66-4403-b342-8eae9b5b3665 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:33:44.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6801" for this suite. 
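The (root,0644,tmpfs) variant above writes a file as root with mode 0644 into a memory-backed emptyDir and reads it back. A busybox stand-in for the suite's mounttest image, showing the shape of the check (the volume layout mirrors the tmpfs sketch earlier in this log):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // modeCheckContainer writes and re-reads a 0644 file in the tmpfs
    // mount; the spec asserts on the printed mode bits and content.
    func modeCheckContainer() corev1.Container {
        return corev1.Container{
            Name:  "test-container",
            Image: "busybox",
            Command: []string{"sh", "-c",
                "echo -n mount-tester > /test-volume/f && chmod 0644 /test-volume/f" +
                    " && ls -l /test-volume/f && cat /test-volume/f"},
            VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
        }
    }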
Mar 30 14:33:50.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:33:50.901: INFO: namespace emptydir-6801 deletion completed in 6.091395297s • [SLOW TEST:10.240 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:33:50.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-xclw STEP: Creating a pod to test atomic-volume-subpath Mar 30 14:33:50.970: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xclw" in namespace "subpath-4444" to be "success or failure" Mar 30 14:33:50.982: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.251334ms Mar 30 14:33:53.032: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061898995s Mar 30 14:33:55.037: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 4.066838751s Mar 30 14:33:57.041: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 6.071086824s Mar 30 14:33:59.045: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 8.075305831s Mar 30 14:34:01.050: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 10.079994243s Mar 30 14:34:03.054: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 12.084306913s Mar 30 14:34:05.059: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 14.088753496s Mar 30 14:34:07.063: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 16.092978496s Mar 30 14:34:09.068: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 18.097494475s Mar 30 14:34:11.072: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 20.101774175s Mar 30 14:34:13.076: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Running", Reason="", readiness=true. Elapsed: 22.106098015s Mar 30 14:34:15.081: INFO: Pod "pod-subpath-test-secret-xclw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.110662297s STEP: Saw pod success Mar 30 14:34:15.081: INFO: Pod "pod-subpath-test-secret-xclw" satisfied condition "success or failure" Mar 30 14:34:15.084: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-xclw container test-container-subpath-secret-xclw: STEP: delete the pod Mar 30 14:34:15.126: INFO: Waiting for pod pod-subpath-test-secret-xclw to disappear Mar 30 14:34:15.141: INFO: Pod pod-subpath-test-secret-xclw no longer exists STEP: Deleting pod pod-subpath-test-secret-xclw Mar 30 14:34:15.141: INFO: Deleting pod "pod-subpath-test-secret-xclw" in namespace "subpath-4444" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:34:15.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4444" for this suite. Mar 30 14:34:21.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:34:21.255: INFO: namespace subpath-4444 deletion completed in 6.10893554s • [SLOW TEST:30.354 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:34:21.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-f26339c7-4ea9-4422-9596-362c9a8126d6 STEP: Creating secret with name s-test-opt-upd-6c0b4c58-832b-40e4-a9ac-b8cbe84eede6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f26339c7-4ea9-4422-9596-362c9a8126d6 STEP: Updating secret s-test-opt-upd-6c0b4c58-832b-40e4-a9ac-b8cbe84eede6 STEP: Creating secret with name s-test-opt-create-5ad5e9f7-6db3-47b0-8389-6ee169657ef5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:34:31.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-136" for this suite. 
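The optional-secrets spec above mounts three secret volumes, one of which does not exist yet, then deletes, updates, and creates them and polls the pod for the changes. The piece that makes this legal is Optional on the volume source; a sketch (secret name follows the log's s-test-opt-create prefix):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // optionalSecretVolume is admitted even if the secret is absent: the
    // kubelet mounts an empty directory, back-fills it on a later sync once
    // the secret appears, and likewise propagates updates and deletions to
    // already-mounted secret volumes — the behavior the spec waits on.
    func optionalSecretVolume(secretName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "creates-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: secretName, // created only after the pod starts
                    Optional:   &optional,
                },
            },
        }
    }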
Mar 30 14:34:47.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:34:47.541: INFO: namespace secrets-136 deletion completed in 16.090708701s • [SLOW TEST:26.286 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:34:47.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 30 14:34:47.656: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"17e7dc16-c89e-426a-af11-7734a13c817f", Controller:(*bool)(0xc002613012), BlockOwnerDeletion:(*bool)(0xc002613013)}} Mar 30 14:34:47.669: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fff5016a-be4b-4cc1-8990-f7e64c4c9de3", Controller:(*bool)(0xc0026f46b2), BlockOwnerDeletion:(*bool)(0xc0026f46b3)}} Mar 30 14:34:47.700: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ee16bac4-310c-491c-80e3-459baa638969", Controller:(*bool)(0xc0024260da), BlockOwnerDeletion:(*bool)(0xc0024260db)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:34:52.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5334" for this suite. 
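The dependency-circle spec above wires pod1 → pod3 → pod2 → pod1 through ownerReferences (the UIDs in the log show the cycle) and checks the garbage collector neither deadlocks nor deletes the live pods. A sketch of how such a cycle is linked after the pods exist, since an ownerReference needs the owner's UID:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // linkCycle sets each pod's sole owner to the next pod in the ring,
    // matching the log: pod1 owned by pod3, pod2 by pod1, pod3 by pod2.
    func linkCycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
        owners := map[string]string{"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
        yes := true
        for name, ownerName := range owners {
            owner, err := cs.CoreV1().Pods(ns).Get(ctx, ownerName, metav1.GetOptions{})
            if err != nil {
                return err
            }
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            pod.OwnerReferences = []metav1.OwnerReference{{
                APIVersion: "v1", Kind: "Pod",
                Name: owner.Name, UID: owner.UID,
                Controller: &yes, BlockOwnerDeletion: &yes,
            }}
            if _, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
                return err
            }
        }
        return nil
    }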
Mar 30 14:34:58.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:34:58.851: INFO: namespace gc-5334 deletion completed in 6.085185677s • [SLOW TEST:11.310 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 30 14:34:58.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9338 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9338 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9338 Mar 30 14:34:58.950: INFO: Found 0 stateful pods, waiting for 1 Mar 30 14:35:08.956: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 30 14:35:08.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:35:11.442: INFO: stderr: "I0330 14:35:11.317265 2989 log.go:172] (0xc000b5e420) (0xc00052cb40) Create stream\nI0330 14:35:11.317309 2989 log.go:172] (0xc000b5e420) (0xc00052cb40) Stream added, broadcasting: 1\nI0330 14:35:11.319947 2989 log.go:172] (0xc000b5e420) Reply frame received for 1\nI0330 14:35:11.319990 2989 log.go:172] (0xc000b5e420) (0xc000950000) Create stream\nI0330 14:35:11.320002 2989 log.go:172] (0xc000b5e420) (0xc000950000) Stream added, broadcasting: 3\nI0330 14:35:11.321041 2989 log.go:172] (0xc000b5e420) Reply frame received for 3\nI0330 14:35:11.321265 2989 log.go:172] (0xc000b5e420) (0xc000a4e000) Create stream\nI0330 14:35:11.321309 2989 log.go:172] (0xc000b5e420) (0xc000a4e000) Stream added, broadcasting: 5\nI0330 14:35:11.322515 2989 log.go:172] (0xc000b5e420) Reply frame received for 5\nI0330 14:35:11.407374 2989 log.go:172] (0xc000b5e420) Data frame received for 5\nI0330 14:35:11.407402 2989 log.go:172] (0xc000a4e000) (5) Data frame handling\nI0330 14:35:11.407419 2989 log.go:172] (0xc000a4e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:35:11.434556 2989 log.go:172] (0xc000b5e420) Data frame 
received for 3\nI0330 14:35:11.434594 2989 log.go:172] (0xc000950000) (3) Data frame handling\nI0330 14:35:11.434617 2989 log.go:172] (0xc000950000) (3) Data frame sent\nI0330 14:35:11.434891 2989 log.go:172] (0xc000b5e420) Data frame received for 3\nI0330 14:35:11.434921 2989 log.go:172] (0xc000950000) (3) Data frame handling\nI0330 14:35:11.434962 2989 log.go:172] (0xc000b5e420) Data frame received for 5\nI0330 14:35:11.434977 2989 log.go:172] (0xc000a4e000) (5) Data frame handling\nI0330 14:35:11.436634 2989 log.go:172] (0xc000b5e420) Data frame received for 1\nI0330 14:35:11.436655 2989 log.go:172] (0xc00052cb40) (1) Data frame handling\nI0330 14:35:11.436669 2989 log.go:172] (0xc00052cb40) (1) Data frame sent\nI0330 14:35:11.436687 2989 log.go:172] (0xc000b5e420) (0xc00052cb40) Stream removed, broadcasting: 1\nI0330 14:35:11.436713 2989 log.go:172] (0xc000b5e420) Go away received\nI0330 14:35:11.437317 2989 log.go:172] (0xc000b5e420) (0xc00052cb40) Stream removed, broadcasting: 1\nI0330 14:35:11.437351 2989 log.go:172] (0xc000b5e420) (0xc000950000) Stream removed, broadcasting: 3\nI0330 14:35:11.437371 2989 log.go:172] (0xc000b5e420) (0xc000a4e000) Stream removed, broadcasting: 5\n" Mar 30 14:35:11.442: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:35:11.443: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:35:11.446: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 30 14:35:21.451: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:35:21.451: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:35:21.486: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:21.486: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:21.486: INFO: Mar 30 14:35:21.486: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 30 14:35:22.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977739824s Mar 30 14:35:23.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.973185591s Mar 30 14:35:24.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.875048071s Mar 30 14:35:25.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.87021971s Mar 30 14:35:26.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.864795406s Mar 30 14:35:27.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.859848764s Mar 30 14:35:28.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.854509473s Mar 30 14:35:29.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.849324434s Mar 30 14:35:30.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 844.288117ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9338 Mar 30 14:35:31.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:35:31.844: INFO: stderr: "I0330 14:35:31.762322 3022 log.go:172] (0xc0009402c0) (0xc00081a640) Create stream\nI0330 14:35:31.762396 3022 log.go:172] (0xc0009402c0) (0xc00081a640) Stream added, broadcasting: 1\nI0330 14:35:31.764747 3022 log.go:172] (0xc0009402c0) Reply frame received for 1\nI0330 14:35:31.764777 3022 log.go:172] (0xc0009402c0) (0xc00097e000) Create stream\nI0330 14:35:31.764786 3022 log.go:172] (0xc0009402c0) (0xc00097e000) Stream added, broadcasting: 3\nI0330 14:35:31.765966 3022 log.go:172] (0xc0009402c0) Reply frame received for 3\nI0330 14:35:31.766017 3022 log.go:172] (0xc0009402c0) (0xc0006ae280) Create stream\nI0330 14:35:31.766030 3022 log.go:172] (0xc0009402c0) (0xc0006ae280) Stream added, broadcasting: 5\nI0330 14:35:31.767117 3022 log.go:172] (0xc0009402c0) Reply frame received for 5\nI0330 14:35:31.834635 3022 log.go:172] (0xc0009402c0) Data frame received for 5\nI0330 14:35:31.834662 3022 log.go:172] (0xc0006ae280) (5) Data frame handling\nI0330 14:35:31.834681 3022 log.go:172] (0xc0006ae280) (5) Data frame sent\nI0330 14:35:31.834702 3022 log.go:172] (0xc0009402c0) Data frame received for 5\nI0330 14:35:31.834714 3022 log.go:172] (0xc0006ae280) (5) Data frame handling\nI0330 14:35:31.834727 3022 log.go:172] (0xc0009402c0) Data frame received for 3\nI0330 14:35:31.834734 3022 log.go:172] (0xc00097e000) (3) Data frame handling\nI0330 14:35:31.834743 3022 log.go:172] (0xc00097e000) (3) Data frame sent\nI0330 14:35:31.834750 3022 log.go:172] (0xc0009402c0) Data frame received for 3\nI0330 14:35:31.834756 3022 log.go:172] (0xc00097e000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0330 14:35:31.836705 3022 log.go:172] (0xc0009402c0) Data frame received for 1\nI0330 14:35:31.836741 3022 log.go:172] (0xc00081a640) (1) Data frame handling\nI0330 14:35:31.836763 3022 log.go:172] (0xc00081a640) (1) Data frame sent\nI0330 14:35:31.836784 3022 log.go:172] (0xc0009402c0) (0xc00081a640) Stream removed, broadcasting: 1\nI0330 14:35:31.836809 3022 log.go:172] (0xc0009402c0) Go away received\nI0330 14:35:31.837369 3022 log.go:172] (0xc0009402c0) (0xc00081a640) Stream removed, broadcasting: 1\nI0330 14:35:31.837392 3022 log.go:172] (0xc0009402c0) (0xc00097e000) Stream removed, broadcasting: 3\nI0330 14:35:31.837404 3022 log.go:172] (0xc0009402c0) (0xc0006ae280) Stream removed, broadcasting: 5\n" Mar 30 14:35:31.845: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:35:31.845: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:35:31.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:35:32.047: INFO: stderr: "I0330 14:35:31.964542 3044 log.go:172] (0xc00089e580) (0xc00040ea00) Create stream\nI0330 14:35:31.964606 3044 log.go:172] (0xc00089e580) (0xc00040ea00) Stream added, broadcasting: 1\nI0330 14:35:31.967241 3044 log.go:172] (0xc00089e580) Reply frame received for 1\nI0330 14:35:31.967280 3044 log.go:172] (0xc00089e580) (0xc0004e0140) Create stream\nI0330 14:35:31.967298 3044 log.go:172] (0xc00089e580) (0xc0004e0140) Stream added, broadcasting: 3\nI0330 14:35:31.968235 3044 log.go:172] (0xc00089e580) Reply frame received for 3\nI0330 14:35:31.968274 3044 log.go:172] (0xc00089e580) (0xc0008a8000) Create 
stream\nI0330 14:35:31.968294 3044 log.go:172] (0xc00089e580) (0xc0008a8000) Stream added, broadcasting: 5\nI0330 14:35:31.969319 3044 log.go:172] (0xc00089e580) Reply frame received for 5\nI0330 14:35:32.040227 3044 log.go:172] (0xc00089e580) Data frame received for 3\nI0330 14:35:32.040261 3044 log.go:172] (0xc0004e0140) (3) Data frame handling\nI0330 14:35:32.040275 3044 log.go:172] (0xc0004e0140) (3) Data frame sent\nI0330 14:35:32.040300 3044 log.go:172] (0xc00089e580) Data frame received for 3\nI0330 14:35:32.040332 3044 log.go:172] (0xc00089e580) Data frame received for 5\nI0330 14:35:32.040369 3044 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0330 14:35:32.040392 3044 log.go:172] (0xc0008a8000) (5) Data frame sent\nI0330 14:35:32.040412 3044 log.go:172] (0xc00089e580) Data frame received for 5\nI0330 14:35:32.040428 3044 log.go:172] (0xc0008a8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0330 14:35:32.040443 3044 log.go:172] (0xc0004e0140) (3) Data frame handling\nI0330 14:35:32.042179 3044 log.go:172] (0xc00089e580) Data frame received for 1\nI0330 14:35:32.042201 3044 log.go:172] (0xc00040ea00) (1) Data frame handling\nI0330 14:35:32.042213 3044 log.go:172] (0xc00040ea00) (1) Data frame sent\nI0330 14:35:32.042225 3044 log.go:172] (0xc00089e580) (0xc00040ea00) Stream removed, broadcasting: 1\nI0330 14:35:32.042588 3044 log.go:172] (0xc00089e580) (0xc00040ea00) Stream removed, broadcasting: 1\nI0330 14:35:32.042614 3044 log.go:172] (0xc00089e580) (0xc0004e0140) Stream removed, broadcasting: 3\nI0330 14:35:32.042625 3044 log.go:172] (0xc00089e580) (0xc0008a8000) Stream removed, broadcasting: 5\n" Mar 30 14:35:32.047: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:35:32.047: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:35:32.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 30 14:35:32.240: INFO: stderr: "I0330 14:35:32.174098 3063 log.go:172] (0xc000698c60) (0xc000672c80) Create stream\nI0330 14:35:32.174150 3063 log.go:172] (0xc000698c60) (0xc000672c80) Stream added, broadcasting: 1\nI0330 14:35:32.178627 3063 log.go:172] (0xc000698c60) Reply frame received for 1\nI0330 14:35:32.178677 3063 log.go:172] (0xc000698c60) (0xc0006723c0) Create stream\nI0330 14:35:32.178693 3063 log.go:172] (0xc000698c60) (0xc0006723c0) Stream added, broadcasting: 3\nI0330 14:35:32.179729 3063 log.go:172] (0xc000698c60) Reply frame received for 3\nI0330 14:35:32.179766 3063 log.go:172] (0xc000698c60) (0xc0001d6000) Create stream\nI0330 14:35:32.179782 3063 log.go:172] (0xc000698c60) (0xc0001d6000) Stream added, broadcasting: 5\nI0330 14:35:32.180690 3063 log.go:172] (0xc000698c60) Reply frame received for 5\nI0330 14:35:32.232866 3063 log.go:172] (0xc000698c60) Data frame received for 3\nI0330 14:35:32.232934 3063 log.go:172] (0xc0006723c0) (3) Data frame handling\nI0330 14:35:32.232954 3063 log.go:172] (0xc0006723c0) (3) Data frame sent\nI0330 14:35:32.232966 3063 log.go:172] (0xc000698c60) Data frame received for 3\nI0330 14:35:32.232977 3063 log.go:172] (0xc0006723c0) (3) Data frame handling\nI0330 14:35:32.233001 3063 log.go:172] (0xc000698c60) Data frame received for 5\nI0330 14:35:32.233034 3063 log.go:172] 
(0xc0001d6000) (5) Data frame handling\nI0330 14:35:32.233062 3063 log.go:172] (0xc0001d6000) (5) Data frame sent\nI0330 14:35:32.233076 3063 log.go:172] (0xc000698c60) Data frame received for 5\nI0330 14:35:32.233086 3063 log.go:172] (0xc0001d6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0330 14:35:32.234932 3063 log.go:172] (0xc000698c60) Data frame received for 1\nI0330 14:35:32.234958 3063 log.go:172] (0xc000672c80) (1) Data frame handling\nI0330 14:35:32.234969 3063 log.go:172] (0xc000672c80) (1) Data frame sent\nI0330 14:35:32.234981 3063 log.go:172] (0xc000698c60) (0xc000672c80) Stream removed, broadcasting: 1\nI0330 14:35:32.235293 3063 log.go:172] (0xc000698c60) (0xc000672c80) Stream removed, broadcasting: 1\nI0330 14:35:32.235320 3063 log.go:172] (0xc000698c60) (0xc0006723c0) Stream removed, broadcasting: 3\nI0330 14:35:32.235331 3063 log.go:172] (0xc000698c60) (0xc0001d6000) Stream removed, broadcasting: 5\n" Mar 30 14:35:32.240: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 30 14:35:32.240: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 30 14:35:32.244: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 14:35:32.244: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 14:35:32.244: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 30 14:35:32.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:35:32.438: INFO: stderr: "I0330 14:35:32.373419 3083 log.go:172] (0xc0009a2630) (0xc000594a00) Create stream\nI0330 14:35:32.373485 3083 log.go:172] (0xc0009a2630) (0xc000594a00) Stream added, broadcasting: 1\nI0330 14:35:32.377637 3083 log.go:172] (0xc0009a2630) Reply frame received for 1\nI0330 14:35:32.377681 3083 log.go:172] (0xc0009a2630) (0xc000690000) Create stream\nI0330 14:35:32.377694 3083 log.go:172] (0xc0009a2630) (0xc000690000) Stream added, broadcasting: 3\nI0330 14:35:32.378707 3083 log.go:172] (0xc0009a2630) Reply frame received for 3\nI0330 14:35:32.378742 3083 log.go:172] (0xc0009a2630) (0xc000594280) Create stream\nI0330 14:35:32.378754 3083 log.go:172] (0xc0009a2630) (0xc000594280) Stream added, broadcasting: 5\nI0330 14:35:32.379676 3083 log.go:172] (0xc0009a2630) Reply frame received for 5\nI0330 14:35:32.431486 3083 log.go:172] (0xc0009a2630) Data frame received for 5\nI0330 14:35:32.431524 3083 log.go:172] (0xc000594280) (5) Data frame handling\nI0330 14:35:32.431538 3083 log.go:172] (0xc000594280) (5) Data frame sent\nI0330 14:35:32.431549 3083 log.go:172] (0xc0009a2630) Data frame received for 5\nI0330 14:35:32.431559 3083 log.go:172] (0xc000594280) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:35:32.431583 3083 log.go:172] (0xc0009a2630) Data frame received for 3\nI0330 14:35:32.431593 3083 log.go:172] (0xc000690000) (3) Data frame handling\nI0330 14:35:32.431610 3083 log.go:172] (0xc000690000) (3) Data frame sent\nI0330 14:35:32.431623 3083 log.go:172] (0xc0009a2630) Data frame received for 3\nI0330 14:35:32.431633 3083 log.go:172] (0xc000690000) (3) Data frame 
handling\nI0330 14:35:32.433512 3083 log.go:172] (0xc0009a2630) Data frame received for 1\nI0330 14:35:32.433562 3083 log.go:172] (0xc000594a00) (1) Data frame handling\nI0330 14:35:32.433603 3083 log.go:172] (0xc000594a00) (1) Data frame sent\nI0330 14:35:32.433627 3083 log.go:172] (0xc0009a2630) (0xc000594a00) Stream removed, broadcasting: 1\nI0330 14:35:32.433645 3083 log.go:172] (0xc0009a2630) Go away received\nI0330 14:35:32.434062 3083 log.go:172] (0xc0009a2630) (0xc000594a00) Stream removed, broadcasting: 1\nI0330 14:35:32.434086 3083 log.go:172] (0xc0009a2630) (0xc000690000) Stream removed, broadcasting: 3\nI0330 14:35:32.434098 3083 log.go:172] (0xc0009a2630) (0xc000594280) Stream removed, broadcasting: 5\n" Mar 30 14:35:32.438: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:35:32.438: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:35:32.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:35:32.697: INFO: stderr: "I0330 14:35:32.585372 3104 log.go:172] (0xc000ab2420) (0xc000686960) Create stream\nI0330 14:35:32.585820 3104 log.go:172] (0xc000ab2420) (0xc000686960) Stream added, broadcasting: 1\nI0330 14:35:32.593537 3104 log.go:172] (0xc000ab2420) Reply frame received for 1\nI0330 14:35:32.593582 3104 log.go:172] (0xc000ab2420) (0xc000686140) Create stream\nI0330 14:35:32.593593 3104 log.go:172] (0xc000ab2420) (0xc000686140) Stream added, broadcasting: 3\nI0330 14:35:32.594674 3104 log.go:172] (0xc000ab2420) Reply frame received for 3\nI0330 14:35:32.594719 3104 log.go:172] (0xc000ab2420) (0xc00034a000) Create stream\nI0330 14:35:32.594737 3104 log.go:172] (0xc000ab2420) (0xc00034a000) Stream added, broadcasting: 5\nI0330 14:35:32.595566 3104 log.go:172] (0xc000ab2420) Reply frame received for 5\nI0330 14:35:32.648856 3104 log.go:172] (0xc000ab2420) Data frame received for 5\nI0330 14:35:32.648900 3104 log.go:172] (0xc00034a000) (5) Data frame handling\nI0330 14:35:32.648922 3104 log.go:172] (0xc00034a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:35:32.689987 3104 log.go:172] (0xc000ab2420) Data frame received for 3\nI0330 14:35:32.690022 3104 log.go:172] (0xc000686140) (3) Data frame handling\nI0330 14:35:32.690066 3104 log.go:172] (0xc000686140) (3) Data frame sent\nI0330 14:35:32.690329 3104 log.go:172] (0xc000ab2420) Data frame received for 3\nI0330 14:35:32.690423 3104 log.go:172] (0xc000686140) (3) Data frame handling\nI0330 14:35:32.690457 3104 log.go:172] (0xc000ab2420) Data frame received for 5\nI0330 14:35:32.690495 3104 log.go:172] (0xc00034a000) (5) Data frame handling\nI0330 14:35:32.692465 3104 log.go:172] (0xc000ab2420) Data frame received for 1\nI0330 14:35:32.692499 3104 log.go:172] (0xc000686960) (1) Data frame handling\nI0330 14:35:32.692531 3104 log.go:172] (0xc000686960) (1) Data frame sent\nI0330 14:35:32.692558 3104 log.go:172] (0xc000ab2420) (0xc000686960) Stream removed, broadcasting: 1\nI0330 14:35:32.692653 3104 log.go:172] (0xc000ab2420) Go away received\nI0330 14:35:32.693025 3104 log.go:172] (0xc000ab2420) (0xc000686960) Stream removed, broadcasting: 1\nI0330 14:35:32.693050 3104 log.go:172] (0xc000ab2420) (0xc000686140) Stream removed, broadcasting: 3\nI0330 14:35:32.693067 3104 log.go:172] (0xc000ab2420) (0xc00034a000) Stream removed, 
broadcasting: 5\n" Mar 30 14:35:32.697: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:35:32.697: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:35:32.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9338 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 30 14:35:32.925: INFO: stderr: "I0330 14:35:32.826616 3125 log.go:172] (0xc000966420) (0xc0005a46e0) Create stream\nI0330 14:35:32.826674 3125 log.go:172] (0xc000966420) (0xc0005a46e0) Stream added, broadcasting: 1\nI0330 14:35:32.829938 3125 log.go:172] (0xc000966420) Reply frame received for 1\nI0330 14:35:32.830084 3125 log.go:172] (0xc000966420) (0xc0007e6000) Create stream\nI0330 14:35:32.830148 3125 log.go:172] (0xc000966420) (0xc0007e6000) Stream added, broadcasting: 3\nI0330 14:35:32.831818 3125 log.go:172] (0xc000966420) Reply frame received for 3\nI0330 14:35:32.831882 3125 log.go:172] (0xc000966420) (0xc0005a4000) Create stream\nI0330 14:35:32.831903 3125 log.go:172] (0xc000966420) (0xc0005a4000) Stream added, broadcasting: 5\nI0330 14:35:32.832837 3125 log.go:172] (0xc000966420) Reply frame received for 5\nI0330 14:35:32.889751 3125 log.go:172] (0xc000966420) Data frame received for 5\nI0330 14:35:32.889783 3125 log.go:172] (0xc0005a4000) (5) Data frame handling\nI0330 14:35:32.889802 3125 log.go:172] (0xc0005a4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0330 14:35:32.917687 3125 log.go:172] (0xc000966420) Data frame received for 3\nI0330 14:35:32.917733 3125 log.go:172] (0xc0007e6000) (3) Data frame handling\nI0330 14:35:32.917808 3125 log.go:172] (0xc0007e6000) (3) Data frame sent\nI0330 14:35:32.918059 3125 log.go:172] (0xc000966420) Data frame received for 3\nI0330 14:35:32.918100 3125 log.go:172] (0xc0007e6000) (3) Data frame handling\nI0330 14:35:32.918145 3125 log.go:172] (0xc000966420) Data frame received for 5\nI0330 14:35:32.918163 3125 log.go:172] (0xc0005a4000) (5) Data frame handling\nI0330 14:35:32.920023 3125 log.go:172] (0xc000966420) Data frame received for 1\nI0330 14:35:32.920066 3125 log.go:172] (0xc0005a46e0) (1) Data frame handling\nI0330 14:35:32.920121 3125 log.go:172] (0xc0005a46e0) (1) Data frame sent\nI0330 14:35:32.920143 3125 log.go:172] (0xc000966420) (0xc0005a46e0) Stream removed, broadcasting: 1\nI0330 14:35:32.920166 3125 log.go:172] (0xc000966420) Go away received\nI0330 14:35:32.920656 3125 log.go:172] (0xc000966420) (0xc0005a46e0) Stream removed, broadcasting: 1\nI0330 14:35:32.920681 3125 log.go:172] (0xc000966420) (0xc0007e6000) Stream removed, broadcasting: 3\nI0330 14:35:32.920693 3125 log.go:172] (0xc000966420) (0xc0005a4000) Stream removed, broadcasting: 5\n" Mar 30 14:35:32.925: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 30 14:35:32.925: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 30 14:35:32.925: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:35:32.929: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 30 14:35:42.937: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:35:42.937: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:35:42.937: INFO: Waiting for pod 
ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 30 14:35:42.952: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:42.952: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:42.952: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:42.952: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:42.952: INFO: Mar 30 14:35:42.952: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:44.022: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:44.022: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:44.022: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:44.022: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:44.022: INFO: Mar 30 14:35:44.022: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:45.028: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:45.028: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:45.028: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:45.028: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:45.028: INFO: Mar 30 14:35:45.028: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:46.034: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:46.034: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:46.034: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:46.034: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:46.034: INFO: Mar 30 14:35:46.034: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:47.039: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:47.039: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:47.039: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:47.039: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:47.039: INFO: Mar 30 14:35:47.039: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:48.044: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:48.044: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:48.044: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:48.044: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:48.044: INFO: Mar 30 14:35:48.044: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:49.049: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:49.049: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:49.049: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:49.049: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:49.050: INFO: Mar 30 14:35:49.050: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:50.055: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:50.055: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:50.055: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:50.055: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:50.056: INFO: Mar 30 14:35:50.056: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:51.061: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:51.061: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:51.061: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:51.061: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:51.061: INFO: Mar 30 14:35:51.062: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 14:35:52.066: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 14:35:52.066: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:34:58 +0000 UTC }] Mar 30 14:35:52.067: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 14:35:21 +0000 UTC }] Mar 30 14:35:52.067: INFO: Mar 30 14:35:52.067: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9338 Mar 30 14:35:53.072: INFO: Scaling statefulset ss to 0 Mar 30 14:35:53.079: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 30 14:35:53.081: INFO: Deleting all statefulsets in ns statefulset-9338 Mar 30 14:35:53.083: INFO: Scaling statefulset ss to 0 Mar 30 14:35:53.092: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 14:35:53.095: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 30 14:35:53.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9338" for this suite. 
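Behind the status dumps above: burst scaling works because the set runs with the Parallel pod management policy, so the controller creates and deletes replicas without waiting for earlier ordinals to become Ready, while the mv of index.html deliberately breaks nginx's readiness check to keep pods unhealthy throughout. A rough Go sketch of such a spec, assuming current k8s.io/api types (the name, headless service, and image mirror the log; everything else is illustrative, not the e2e suite's own code):

    package burstss

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // burstStatefulSet builds a StatefulSet like the test's "ss": nginx pods
    // behind the headless service "test", with Parallel pod management so
    // replicas are created and deleted in a burst instead of one at a time.
    func burstStatefulSet(replicas int32) *appsv1.StatefulSet {
        labels := map[string]string{"app": "ss"}
        return &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-9338"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:            &replicas,
                ServiceName:         "test",
                PodManagementPolicy: appsv1.ParallelPodManagement,
                Selector:            &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                            // The test's readiness check serves index.html,
                            // which is why mv-ing it away flips Ready=false.
                        }},
                    },
                },
            },
        }
    }

With the default OrderedReady policy, the controller would not create ss-1 while ss-0 was unready, so the scale-up logged above could not have completed.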
Mar 30 14:35:59.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 30 14:35:59.199: INFO: namespace statefulset-9338 deletion completed in 6.090091541s • [SLOW TEST:60.348 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS
Mar 30 14:35:59.200: INFO: Running AfterSuite actions on all nodes Mar 30 14:35:59.200: INFO: Running AfterSuite actions on node 1 Mar 30 14:35:59.200: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6015.203 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
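A final note on the verbose stderr blocks that accompanied every kubectl exec above: the "Create stream ... broadcasting: 1/3/5" frames are the SPDY multiplexing that carries the remote command's channels, roughly a control/reply stream plus stdout and stderr (the shell's `+ mv -v ...` trace rides the stderr stream). A hedged sketch of issuing the same exec programmatically with client-go's remotecommand package (modern client-go API; the function and argument names here are assumptions):

    package execsketch

    import (
        "bytes"
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // execMv runs the readiness-breaking command from the test inside the
    // given pod's nginx container and returns its stdout and stderr.
    func execMv(cfg *restclient.Config, cs kubernetes.Interface, ns, pod string) (string, string, error) {
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "nginx",
                Command:   []string{"/bin/sh", "-x", "-c", "mv -v /usr/share/nginx/html/index.html /tmp/ || true"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", "", err
        }
        var stdout, stderr bytes.Buffer
        // StreamWithContext wires our buffers onto the SPDY streams that the
        // log.go frames in the output above were narrating.
        err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
            Stdout: &stdout, Stderr: &stderr,
        })
        return stdout.String(), stderr.String(), err
    }

The stdout captured this way is the same "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'" line the framework logged after each exec.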