I0508 12:55:44.007075 6 e2e.go:243] Starting e2e run "a8795f82-b82d-40be-9f6f-159a104a1952" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588942543 - Will randomize all specs
Will run 215 of 4412 specs

May 8 12:55:44.209: INFO: >>> kubeConfig: /root/.kube/config
May 8 12:55:44.211: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 8 12:55:44.288: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 8 12:55:44.323: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 8 12:55:44.323: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 8 12:55:44.323: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 8 12:55:44.396: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 8 12:55:44.396: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 8 12:55:44.396: INFO: e2e test version: v1.15.11
May 8 12:55:44.398: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:55:44.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
May 8 12:55:44.439: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:55:44.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7935" for this suite.
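------------------------------
A minimal Go sketch of the rule the QOS spec above verifies: a pod whose containers set resource requests equal to limits is assigned the Guaranteed QOS class in status.qosClass. This is illustrative only, assuming the v1.15-era k8s.io/api and k8s.io/apimachinery modules; the container name and image are placeholders, not the test's actual fixtures.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // Requests == limits for every container => QOS class "Guaranteed".
    res := v1.ResourceRequirements{
        Requests: v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        },
        Limits: v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        },
    }
    pod := v1.Pod{Spec: v1.PodSpec{Containers: []v1.Container{{
        Name:      "test-container",        // illustrative name
        Image:     "k8s.gcr.io/pause:3.1", // illustrative image
        Resources: res,
    }}}}
    fmt.Printf("expecting status.qosClass=Guaranteed for resources %+v\n",
        pod.Spec.Containers[0].Resources)
}
------------------------------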
May 8 12:56:06.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:56:06.646: INFO: namespace pods-7935 deletion completed in 22.118569728s

• [SLOW TEST:22.247 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:56:06.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 8 12:56:06.727: INFO: Waiting up to 5m0s for pod "pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab" in namespace "emptydir-9908" to be "success or failure"
May 8 12:56:06.749: INFO: Pod "pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 22.571181ms
May 8 12:56:08.753: INFO: Pod "pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026480258s
May 8 12:56:10.757: INFO: Pod "pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030203588s
STEP: Saw pod success
May 8 12:56:10.757: INFO: Pod "pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab" satisfied condition "success or failure"
May 8 12:56:10.760: INFO: Trying to get logs from node iruya-worker pod pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab container test-container:
STEP: delete the pod
May 8 12:56:10.800: INFO: Waiting for pod pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab to disappear
May 8 12:56:10.922: INFO: Pod pod-3344d281-be5d-4f61-b9a4-f6197b0ec3ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:56:10.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9908" for this suite.
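------------------------------
A sketch of the kind of pod this EmptyDir spec submits: an emptyDir volume on the default medium (node-backed storage), mounted into a test container that creates a file with mode 0666 and reports its permissions. Assumes the v1.15-era API modules; the mounttest image and its flags are assumptions drawn from the e2e fixtures of that era, not quoted from this log.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "test-volume",
                // Empty EmptyDirVolumeSource => default medium (not tmpfs).
                VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
            }},
            Containers: []v1.Container{{
                Name:  "test-container",
                Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
                Args: []string{ // assumed flags for the mounttest binary
                    "--new_file_0666=/test-volume/test-file",
                    "--file_perm=/test-volume/test-file",
                },
                VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Println(pod.Spec.Volumes[0].Name, "mounted at", pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}
------------------------------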
May 8 12:56:16.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:56:17.062: INFO: namespace emptydir-9908 deletion completed in 6.135645274s

• [SLOW TEST:10.415 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:56:17.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 8 12:56:17.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5008'
May 8 12:56:20.626: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 8 12:56:20.626: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 8 12:56:20.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-w4cfv]
May 8 12:56:20.678: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-w4cfv" in namespace "kubectl-5008" to be "running and ready"
May 8 12:56:20.692: INFO: Pod "e2e-test-nginx-rc-w4cfv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.019938ms
May 8 12:56:22.697: INFO: Pod "e2e-test-nginx-rc-w4cfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018746782s
May 8 12:56:24.701: INFO: Pod "e2e-test-nginx-rc-w4cfv": Phase="Running", Reason="", readiness=true. Elapsed: 4.023073737s
May 8 12:56:24.701: INFO: Pod "e2e-test-nginx-rc-w4cfv" satisfied condition "running and ready"
May 8 12:56:24.701: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-w4cfv]
May 8 12:56:24.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5008'
May 8 12:56:24.858: INFO: stderr: ""
May 8 12:56:24.858: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
May 8 12:56:24.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5008'
May 8 12:56:24.953: INFO: stderr: ""
May 8 12:56:24.953: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:56:24.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5008" for this suite.
May 8 12:56:30.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:56:31.067: INFO: namespace kubectl-5008 deletion completed in 6.110818531s

• [SLOW TEST:14.005 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:56:31.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4727
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 8 12:56:31.120: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 8 12:56:57.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostName&protocol=udp&host=10.244.2.36&port=8081&tries=1'] Namespace:pod-network-test-4727 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 12:56:57.326: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:56:57.359747 6 log.go:172] (0xc0006ab1e0) (0xc00223f7c0) Create stream
I0508 12:56:57.359780 6 log.go:172] (0xc0006ab1e0) (0xc00223f7c0) Stream added, broadcasting: 1
I0508 12:56:57.362059 6 log.go:172] (0xc0006ab1e0) Reply frame received for 1
I0508 12:56:57.362098 6 log.go:172] (0xc0006ab1e0) (0xc001042780) Create stream
I0508 12:56:57.362110 6 log.go:172] (0xc0006ab1e0) (0xc001042780) Stream added, broadcasting: 3
I0508 12:56:57.363051 6 log.go:172] (0xc0006ab1e0) Reply frame received for 3
I0508 12:56:57.363091 6 log.go:172] (0xc0006ab1e0) (0xc00223f900) Create stream
I0508 12:56:57.363109 6 log.go:172] (0xc0006ab1e0) (0xc00223f900) Stream added, broadcasting: 5
I0508 12:56:57.363871 6 log.go:172] (0xc0006ab1e0) Reply frame received for 5
I0508 12:56:57.507461 6 log.go:172] (0xc0006ab1e0) Data frame received for 3
I0508 12:56:57.507504 6 log.go:172] (0xc001042780) (3) Data frame handling
I0508 12:56:57.507527 6 log.go:172] (0xc001042780) (3) Data frame sent
I0508 12:56:57.508031 6 log.go:172] (0xc0006ab1e0) Data frame received for 3
I0508 12:56:57.508058 6 log.go:172] (0xc001042780) (3) Data frame handling
I0508 12:56:57.508092 6 log.go:172] (0xc0006ab1e0) Data frame received for 5
I0508 12:56:57.508127 6 log.go:172] (0xc00223f900) (5) Data frame handling
I0508 12:56:57.509921 6 log.go:172] (0xc0006ab1e0) Data frame received for 1
I0508 12:56:57.509934 6 log.go:172] (0xc00223f7c0) (1) Data frame handling
I0508 12:56:57.509940 6 log.go:172] (0xc00223f7c0) (1) Data frame sent
I0508 12:56:57.510327 6 log.go:172] (0xc0006ab1e0) (0xc00223f7c0) Stream removed, broadcasting: 1
I0508 12:56:57.510344 6 log.go:172] (0xc0006ab1e0) Go away received
I0508 12:56:57.510624 6 log.go:172] (0xc0006ab1e0) (0xc00223f7c0) Stream removed, broadcasting: 1
I0508 12:56:57.510639 6 log.go:172] (0xc0006ab1e0) (0xc001042780) Stream removed, broadcasting: 3
I0508 12:56:57.510644 6 log.go:172] (0xc0006ab1e0) (0xc00223f900) Stream removed, broadcasting: 5
May 8 12:56:57.510: INFO: Waiting for endpoints: map[]
May 8 12:56:57.513: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostName&protocol=udp&host=10.244.1.207&port=8081&tries=1'] Namespace:pod-network-test-4727 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 12:56:57.513: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:56:57.541567 6 log.go:172] (0xc0022f2bb0) (0xc001c22500) Create stream
I0508 12:56:57.541589 6 log.go:172] (0xc0022f2bb0) (0xc001c22500) Stream added, broadcasting: 1
I0508 12:56:57.543024 6 log.go:172] (0xc0022f2bb0) Reply frame received for 1
I0508 12:56:57.543054 6 log.go:172] (0xc0022f2bb0) (0xc00223f9a0) Create stream
I0508 12:56:57.543065 6 log.go:172] (0xc0022f2bb0) (0xc00223f9a0) Stream added, broadcasting: 3
I0508 12:56:57.543907 6 log.go:172] (0xc0022f2bb0) Reply frame received for 3
I0508 12:56:57.543939 6 log.go:172] (0xc0022f2bb0) (0xc001c22640) Create stream
I0508 12:56:57.543951 6 log.go:172] (0xc0022f2bb0) (0xc001c22640) Stream added, broadcasting: 5
I0508 12:56:57.544659 6 log.go:172] (0xc0022f2bb0) Reply frame received for 5
I0508 12:56:57.604360 6 log.go:172] (0xc0022f2bb0) Data frame received for 3
I0508 12:56:57.604387 6 log.go:172] (0xc00223f9a0) (3) Data frame handling
I0508 12:56:57.604403 6 log.go:172] (0xc00223f9a0) (3) Data frame sent
I0508 12:56:57.605085 6 log.go:172] (0xc0022f2bb0) Data frame received for 3
I0508 12:56:57.605100 6 log.go:172] (0xc00223f9a0) (3) Data frame handling
I0508 12:56:57.605654 6 log.go:172] (0xc0022f2bb0) Data frame received for 5
I0508 12:56:57.605712 6 log.go:172] (0xc001c22640) (5) Data frame handling
I0508 12:56:57.606759 6 log.go:172] (0xc0022f2bb0) Data frame received for 1
I0508 12:56:57.606778 6 log.go:172] (0xc001c22500) (1) Data frame handling
I0508 12:56:57.606794 6 log.go:172] (0xc001c22500) (1) Data frame sent
I0508 12:56:57.606814 6 log.go:172] (0xc0022f2bb0) (0xc001c22500) Stream removed, broadcasting: 1
I0508 12:56:57.606845 6 log.go:172] (0xc0022f2bb0) Go away received
I0508 12:56:57.606953 6 log.go:172] (0xc0022f2bb0) (0xc001c22500) Stream removed, broadcasting: 1
I0508 12:56:57.606974 6 log.go:172] (0xc0022f2bb0) (0xc00223f9a0) Stream removed, broadcasting: 3
I0508 12:56:57.606989 6 log.go:172] (0xc0022f2bb0) (0xc001c22640) Stream removed, broadcasting: 5
May 8 12:56:57.607: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:56:57.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4727" for this suite.
May 8 12:57:19.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:57:19.762: INFO: namespace pod-network-test-4727 deletion completed in 22.150872131s

• [SLOW TEST:48.695 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:57:19.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 8 12:57:19.898: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:19.922: INFO: Number of nodes with available pods: 0
May 8 12:57:19.922: INFO: Node iruya-worker is running more than one daemon pod
May 8 12:57:20.935: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:20.939: INFO: Number of nodes with available pods: 0
May 8 12:57:20.939: INFO: Node iruya-worker is running more than one daemon pod
May 8 12:57:22.044: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:22.048: INFO: Number of nodes with available pods: 0
May 8 12:57:22.048: INFO: Node iruya-worker is running more than one daemon pod
May 8 12:57:23.026: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:23.030: INFO: Number of nodes with available pods: 0
May 8 12:57:23.030: INFO: Node iruya-worker is running more than one daemon pod
May 8 12:57:23.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:23.931: INFO: Number of nodes with available pods: 2
May 8 12:57:23.931: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 8 12:57:23.955: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:23.957: INFO: Number of nodes with available pods: 1
May 8 12:57:23.957: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:24.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:24.966: INFO: Number of nodes with available pods: 1
May 8 12:57:24.966: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:25.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:25.965: INFO: Number of nodes with available pods: 1
May 8 12:57:25.965: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:26.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:26.966: INFO: Number of nodes with available pods: 1
May 8 12:57:26.966: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:27.961: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:27.982: INFO: Number of nodes with available pods: 1
May 8 12:57:27.982: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:28.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:28.966: INFO: Number of nodes with available pods: 1
May 8 12:57:28.966: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:29.963: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:29.965: INFO: Number of nodes with available pods: 1
May 8 12:57:29.965: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:30.978: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:30.981: INFO: Number of nodes with available pods: 1
May 8 12:57:30.981: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:31.967: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:31.970: INFO: Number of nodes with available pods: 1
May 8 12:57:31.970: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:32.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:32.965: INFO: Number of nodes with available pods: 1
May 8 12:57:32.965: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:33.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:33.964: INFO: Number of nodes with available pods: 1
May 8 12:57:33.964: INFO: Node iruya-worker2 is running more than one daemon pod
May 8 12:57:34.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 8 12:57:34.966: INFO: Number of nodes with available pods: 2
May 8 12:57:34.966: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9204, will wait for the garbage collector to delete the pods
May 8 12:57:35.082: INFO: Deleting DaemonSet.extensions daemon-set took: 60.473753ms
May 8 12:57:35.382: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.309464ms
May 8 12:57:42.210: INFO: Number of nodes with available pods: 0
May 8 12:57:42.210: INFO: Number of running nodes: 0, number of available pods: 0
May 8 12:57:42.216: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9204/daemonsets","resourceVersion":"9705702"},"items":null}
May 8 12:57:42.220: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9204/pods","resourceVersion":"9705702"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:57:42.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9204" for this suite.
May 8 12:57:48.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:57:48.335: INFO: namespace daemonsets-9204 deletion completed in 6.100853358s

• [SLOW TEST:28.572 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:57:48.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 8 12:57:48.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1765'
May 8 12:57:48.757: INFO: stderr: ""
May 8 12:57:48.757: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 8 12:57:49.762: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:49.762: INFO: Found 0 / 1
May 8 12:57:50.762: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:50.762: INFO: Found 0 / 1
May 8 12:57:51.762: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:51.762: INFO: Found 0 / 1
May 8 12:57:52.761: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:52.761: INFO: Found 1 / 1
May 8 12:57:52.761: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 8 12:57:52.763: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:52.763: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 8 12:57:52.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-mw4pc --namespace=kubectl-1765 -p {"metadata":{"annotations":{"x":"y"}}}'
May 8 12:57:52.857: INFO: stderr: ""
May 8 12:57:52.857: INFO: stdout: "pod/redis-master-mw4pc patched\n"
STEP: checking annotations
May 8 12:57:52.877: INFO: Selector matched 1 pods for map[app:redis]
May 8 12:57:52.877: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:57:52.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1765" for this suite.
May 8 12:58:14.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:58:14.959: INFO: namespace kubectl-1765 deletion completed in 22.077860302s

• [SLOW TEST:26.624 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:58:14.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-725
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 8 12:58:15.051: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 8 12:58:35.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.40:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-725 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 12:58:35.233: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:58:35.261297 6 log.go:172] (0xc0007e6d10) (0xc000aa9720) Create stream
I0508 12:58:35.261322 6 log.go:172] (0xc0007e6d10) (0xc000aa9720) Stream added, broadcasting: 1
I0508 12:58:35.263155 6 log.go:172] (0xc0007e6d10) Reply frame received for 1
I0508 12:58:35.263193 6 log.go:172] (0xc0007e6d10) (0xc002319540) Create stream
I0508 12:58:35.263206 6 log.go:172] (0xc0007e6d10) (0xc002319540) Stream added, broadcasting: 3
I0508 12:58:35.264112 6 log.go:172] (0xc0007e6d10) Reply frame received for 3
I0508 12:58:35.264147 6 log.go:172] (0xc0007e6d10) (0xc0003b0000) Create stream
I0508 12:58:35.264158 6 log.go:172] (0xc0007e6d10) (0xc0003b0000) Stream added, broadcasting: 5
I0508 12:58:35.264934 6 log.go:172] (0xc0007e6d10) Reply frame received for 5
I0508 12:58:35.323130 6 log.go:172] (0xc0007e6d10) Data frame received for 3
I0508 12:58:35.323191 6 log.go:172] (0xc002319540) (3) Data frame handling
I0508 12:58:35.323210 6 log.go:172] (0xc002319540) (3) Data frame sent
I0508 12:58:35.323221 6 log.go:172] (0xc0007e6d10) Data frame received for 3
I0508 12:58:35.323236 6 log.go:172] (0xc002319540) (3) Data frame handling
I0508 12:58:35.323271 6 log.go:172] (0xc0007e6d10) Data frame received for 5
I0508 12:58:35.323287 6 log.go:172] (0xc0003b0000) (5) Data frame handling
I0508 12:58:35.325445 6 log.go:172] (0xc0007e6d10) Data frame received for 1
I0508 12:58:35.325480 6 log.go:172] (0xc000aa9720) (1) Data frame handling
I0508 12:58:35.325505 6 log.go:172] (0xc000aa9720) (1) Data frame sent
I0508 12:58:35.325701 6 log.go:172] (0xc0007e6d10) (0xc000aa9720) Stream removed, broadcasting: 1
I0508 12:58:35.325833 6 log.go:172] (0xc0007e6d10) (0xc000aa9720) Stream removed, broadcasting: 1
I0508 12:58:35.325858 6 log.go:172] (0xc0007e6d10) (0xc002319540) Stream removed, broadcasting: 3
I0508 12:58:35.325880 6 log.go:172] (0xc0007e6d10) (0xc0003b0000) Stream removed, broadcasting: 5
May 8 12:58:35.325: INFO: Found all expected endpoints: [netserver-0]
I0508 12:58:35.325979 6 log.go:172] (0xc0007e6d10) Go away received
May 8 12:58:35.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.210:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-725 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 8 12:58:35.329: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:58:35.362679 6 log.go:172] (0xc0006ab600) (0xc0003b0d20) Create stream
I0508 12:58:35.362712 6 log.go:172] (0xc0006ab600) (0xc0003b0d20) Stream added, broadcasting: 1
I0508 12:58:35.365671 6 log.go:172] (0xc0006ab600) Reply frame received for 1
I0508 12:58:35.365744 6 log.go:172] (0xc0006ab600) (0xc0003b0dc0) Create stream
I0508 12:58:35.365770 6 log.go:172] (0xc0006ab600) (0xc0003b0dc0) Stream added, broadcasting: 3
I0508 12:58:35.366922 6 log.go:172] (0xc0006ab600) Reply frame received for 3
I0508 12:58:35.366972 6 log.go:172] (0xc0006ab600) (0xc00058b9a0) Create stream
I0508 12:58:35.366988 6 log.go:172] (0xc0006ab600) (0xc00058b9a0) Stream added, broadcasting: 5
I0508 12:58:35.367908 6 log.go:172] (0xc0006ab600) Reply frame received for 5
I0508 12:58:35.438504 6 log.go:172] (0xc0006ab600) Data frame received for 3
I0508 12:58:35.438546 6 log.go:172] (0xc0003b0dc0) (3) Data frame handling
I0508 12:58:35.438578 6 log.go:172] (0xc0003b0dc0) (3) Data frame sent
I0508 12:58:35.438597 6 log.go:172] (0xc0006ab600) Data frame received for 3
I0508 12:58:35.438615 6 log.go:172] (0xc0003b0dc0) (3) Data frame handling
I0508 12:58:35.439091 6 log.go:172] (0xc0006ab600) Data frame received for 5
I0508 12:58:35.439118 6 log.go:172] (0xc00058b9a0) (5) Data frame handling
I0508 12:58:35.443826 6 log.go:172] (0xc0006ab600) Data frame received for 1
I0508 12:58:35.443891 6 log.go:172] (0xc0003b0d20) (1) Data frame handling
I0508 12:58:35.443954 6 log.go:172] (0xc0003b0d20) (1) Data frame sent
I0508 12:58:35.444008 6 log.go:172] (0xc0006ab600) (0xc0003b0d20) Stream removed, broadcasting: 1
I0508 12:58:35.444062 6 log.go:172] (0xc0006ab600) Go away received
I0508 12:58:35.444159 6 log.go:172] (0xc0006ab600) (0xc0003b0d20) Stream removed, broadcasting: 1
I0508 12:58:35.444190 6 log.go:172] (0xc0006ab600) (0xc0003b0dc0) Stream removed, broadcasting: 3
I0508 12:58:35.444207 6 log.go:172] (0xc0006ab600) (0xc00058b9a0) Stream removed, broadcasting: 5
May 8 12:58:35.444: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:58:35.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-725" for this suite.
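------------------------------
The connectivity check above boils down to an HTTP GET against each netserver pod's /hostName endpoint from a host-network pod. A stdlib-only Go sketch of the same probe; the pod IP is the one from this log and will differ on every run.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    // Mirrors the curl flags: --max-time 15 for the overall request timeout.
    client := &http.Client{Timeout: 15 * time.Second}
    resp, err := client.Get("http://10.244.2.40:8080/hostName")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Printf("endpoint reported hostname: %q\n", body)
}
------------------------------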
May 8 12:58:57.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:58:57.557: INFO: namespace pod-network-test-725 deletion completed in 22.1093703s

• [SLOW TEST:42.598 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:58:57.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 8 12:58:57.642: INFO: Waiting up to 5m0s for pod "pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0" in namespace "emptydir-2095" to be "success or failure"
May 8 12:58:57.658: INFO: Pod "pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.526307ms
May 8 12:58:59.663: INFO: Pod "pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020791095s
May 8 12:59:01.666: INFO: Pod "pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024574646s
STEP: Saw pod success
May 8 12:59:01.666: INFO: Pod "pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0" satisfied condition "success or failure"
May 8 12:59:01.669: INFO: Trying to get logs from node iruya-worker2 pod pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0 container test-container:
STEP: delete the pod
May 8 12:59:01.715: INFO: Waiting for pod pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0 to disappear
May 8 12:59:01.723: INFO: Pod pod-9e9f0459-75f1-4f7d-ae8f-da2a5e1e65d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 12:59:01.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2095" for this suite.
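------------------------------
Relative to the default-medium EmptyDir variant earlier in this run, the only spec-level difference in the tmpfs variant is the Medium field on the volume source. A sketch of just that field, not the verbatim test code:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    src := v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{
        Medium: v1.StorageMediumMemory, // backed by tmpfs instead of node disk
    }}
    fmt.Println("medium:", src.EmptyDir.Medium)
}
------------------------------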
May 8 12:59:07.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:59:07.816: INFO: namespace emptydir-2095 deletion completed in 6.088798518s

• [SLOW TEST:10.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:59:07.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 8 12:59:07.878: INFO: Creating deployment "test-recreate-deployment"
May 8 12:59:07.924: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 8 12:59:07.946: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 8 12:59:09.954: INFO: Waiting deployment "test-recreate-deployment" to complete
May 8 12:59:09.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724539547, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724539547, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724539548, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724539547, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 12:59:11.959: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 8 12:59:11.966: INFO: Updating deployment test-recreate-deployment
May 8 12:59:11.966: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 8 12:59:12.511: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4977,SelfLink:/apis/apps/v1/namespaces/deployment-4977/deployments/test-recreate-deployment,UID:887ff507-1231-4f5c-90c1-74d85b95150a,ResourceVersion:9706062,Generation:2,CreationTimestamp:2020-05-08 12:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-08 12:59:12 +0000 UTC 2020-05-08 12:59:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-08 12:59:12 +0000 UTC 2020-05-08 12:59:07 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 8 12:59:12.528: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4977,SelfLink:/apis/apps/v1/namespaces/deployment-4977/replicasets/test-recreate-deployment-5c8c9cc69d,UID:dcb28070-0e50-4096-8b84-8c0b1d2c07ee,ResourceVersion:9706060,Generation:1,CreationTimestamp:2020-05-08 12:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 887ff507-1231-4f5c-90c1-74d85b95150a 0xc00263c937 0xc00263c938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 12:59:12.528: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 8 12:59:12.528: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4977,SelfLink:/apis/apps/v1/namespaces/deployment-4977/replicasets/test-recreate-deployment-6df85df6b9,UID:34fbfa2a-3a6c-43b5-9107-fd8aee91ae46,ResourceVersion:9706051,Generation:2,CreationTimestamp:2020-05-08 12:59:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 887ff507-1231-4f5c-90c1-74d85b95150a 0xc00263ca07 0xc00263ca08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 12:59:12.532: INFO: Pod "test-recreate-deployment-5c8c9cc69d-ws5hq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-ws5hq,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4977,SelfLink:/api/v1/namespaces/deployment-4977/pods/test-recreate-deployment-5c8c9cc69d-ws5hq,UID:89ea76af-b233-4c4b-8798-a14fdc10f79a,ResourceVersion:9706063,Generation:0,CreationTimestamp:2020-05-08 12:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d dcb28070-0e50-4096-8b84-8c0b1d2c07ee 0xc00263d2d7 0xc00263d2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6xwg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6xwg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n6xwg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00263d350} {node.kubernetes.io/unreachable Exists NoExecute 0xc00263d370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 12:59:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 12:59:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 12:59:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 12:59:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-08 12:59:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 12:59:12.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4977" for this suite. 
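------------------------------
A sketch of the deployment shape driving this spec: strategy Recreate, which scales the old ReplicaSet to zero before the new one is created, matching the "delete old pods, then create new ones" behavior asserted above. Labels and image follow the dumps in the log; everything else is illustrative.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
            // Recreate: no RollingUpdate parameters; old pods go away first.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
                Spec: v1.PodSpec{Containers: []v1.Container{{
                    Name:  "nginx",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    fmt.Println("strategy:", d.Spec.Strategy.Type)
}
------------------------------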
May 8 12:59:18.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 12:59:18.729: INFO: namespace deployment-4977 deletion completed in 6.193496228s

• [SLOW TEST:10.913 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 12:59:18.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:00:18.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3476" for this suite.
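------------------------------
The pod in this probe spec never becomes Ready because its readiness probe always fails, and it never restarts because readiness failures, unlike liveness failures, do not kill the container. A sketch of such a probe; /bin/false as the probe command is an assumption, not quoted from the test. Note that v1.Probe embeds Handler in the v1.15 API used here (renamed ProbeHandler in much later releases).

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    probe := &v1.Probe{
        // Exec probe whose command always exits non-zero => never Ready.
        Handler:             v1.Handler{Exec: &v1.ExecAction{Command: []string{"/bin/false"}}},
        InitialDelaySeconds: 30,
        PeriodSeconds:       10, // illustrative cadence
    }
    fmt.Printf("readinessProbe command: %v\n", probe.Exec.Command)
}
------------------------------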
May 8 13:00:40.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:00:40.926: INFO: namespace container-probe-3476 deletion completed in 22.096656749s

• [SLOW TEST:82.197 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:00:40.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8f75403b-b920-46b1-b7dc-97030a5854d4
STEP: Creating a pod to test consume configMaps
May 8 13:00:41.012: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07" in namespace "projected-1609" to be "success or failure"
May 8 13:00:41.019: INFO: Pod "pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07": Phase="Pending", Reason="", readiness=false. Elapsed: 7.477972ms
May 8 13:00:43.075: INFO: Pod "pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063739701s
May 8 13:00:45.078: INFO: Pod "pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066801883s
STEP: Saw pod success
May 8 13:00:45.078: INFO: Pod "pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07" satisfied condition "success or failure"
May 8 13:00:45.081: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07 container projected-configmap-volume-test:
STEP: delete the pod
May 8 13:00:45.122: INFO: Waiting for pod pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07 to disappear
May 8 13:00:45.145: INFO: Pod pod-projected-configmaps-dfae96ad-56d6-44eb-9664-41232163af07 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:00:45.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1609" for this suite.
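------------------------------
A sketch of the volume shape under test: a projected volume with a ConfigMap source and an explicit defaultMode, which sets the file mode of every projected key. The ConfigMap name and mode value here are illustrative, not the test's generated fixtures.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // applied to all projected files unless overridden per item
    src := v1.VolumeSource{Projected: &v1.ProjectedVolumeSource{
        DefaultMode: &mode,
        Sources: []v1.VolumeProjection{{
            ConfigMap: &v1.ConfigMapProjection{
                LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-volume"}, // illustrative
            },
        }},
    }}
    fmt.Printf("defaultMode: %#o\n", *src.Projected.DefaultMode)
}
------------------------------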
May 8 13:00:51.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:00:51.263: INFO: namespace projected-1609 deletion completed in 6.114078739s • [SLOW TEST:10.337 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:00:51.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:00:56.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8354" for this suite. 
May 8 13:01:02.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:01:03.080: INFO: namespace watch-8354 deletion completed in 6.209150736s • [SLOW TEST:11.816 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:01:03.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 8 13:01:03.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 8 13:01:03.342: INFO: stderr: "" May 8 13:01:03.342: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:01:03.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8401" for this suite. 
May 8 13:01:09.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:01:09.443: INFO: namespace kubectl-8401 deletion completed in 6.097022439s • [SLOW TEST:6.363 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:01:09.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-e59193ef-2f4c-4103-8c9e-9934758eaf3b STEP: Creating a pod to test consume secrets May 8 13:01:09.512: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd" in namespace "projected-2948" to be "success or failure" May 8 13:01:09.514: INFO: Pod "pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390082ms May 8 13:01:11.519: INFO: Pod "pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00683975s May 8 13:01:13.523: INFO: Pod "pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011225017s STEP: Saw pod success May 8 13:01:13.523: INFO: Pod "pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd" satisfied condition "success or failure" May 8 13:01:13.526: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd container projected-secret-volume-test: STEP: delete the pod May 8 13:01:13.641: INFO: Waiting for pod pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd to disappear May 8 13:01:13.671: INFO: Pod pod-projected-secrets-85f6d231-3db4-47ea-a02e-fff17b0698fd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:01:13.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2948" for this suite. 
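For the projected-secret variant above, the interesting knobs are the pod-level securityContext fields: runAsUser makes the container run as a non-root UID, and fsGroup controls the group ownership (and group-access bits) the kubelet applies to the projected files, which is what lets a non-root user read files whose defaultMode would otherwise lock them down. A sketch under assumed UIDs and names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-nonroot          # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                       # run as a non-root user
    fsGroup: 2000                         # secret files are group-owned by GID 2000
  containers:
  - name: projected-secret-volume-test    # container name as in the log
    image: busybox
    command: ["sh", "-c", "id && ls -ln /etc/secret"]
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    projected:
      defaultMode: 0440                   # owner+group read; readable by UID 1000 via fsGroup 2000
      sources:
      - secret:
          name: my-secret                 # assumed to exist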
May 8 13:01:19.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:01:19.804: INFO: namespace projected-2948 deletion completed in 6.12921282s • [SLOW TEST:10.360 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:01:19.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0508 13:01:31.651292 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 13:01:31.651: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:01:31.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8015" for this suite. 
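The garbage collector only removes a dependent once all of its ownerReferences are gone, and the spec above relies on exactly that: half of the first ReplicationController's pods are given the second RC as an additional owner, the first RC is deleted in foreground mode (so it waits for its dependents), and the doubly-owned pods must survive because a valid owner remains. The metadata of such a surviving pod looks roughly like the following (the pod name, container, and UIDs are placeholders; real ownerReferences must carry the owners' actual UIDs):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod                           # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted            # the owner being foreground-deleted
    uid: "00000000-0000-0000-0000-000000000001"  # placeholder UID
    blockOwnerDeletion: true                     # this dependent blocks the owner's foreground deletion
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay                  # the still-valid owner that keeps this pod alive
    uid: "00000000-0000-0000-0000-000000000002"  # placeholder UID
spec:
  containers:
  - name: nginx                                  # hypothetical container
    image: nginx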
May 8 13:01:39.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:01:39.749: INFO: namespace gc-8015 deletion completed in 8.094903951s • [SLOW TEST:19.945 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:01:39.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 13:01:50.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:01:50.525: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:01:52.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:01:52.530: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:01:54.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:01:54.530: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:01:56.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:01:56.529: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:01:58.525: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:01:58.529: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:02:00.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:02:00.530: INFO: Pod pod-with-prestop-http-hook still exists May 8 13:02:02.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 8 13:02:02.530: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:02:02.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1140" for this suite. 
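In the lifecycle-hook spec above, the BeforeEach first starts a separate handler pod to receive the hook's HTTP request; the pod under test then declares a preStop httpGet hook pointing at that handler. On deletion the kubelet fires the hook before sending SIGTERM, which is why the log polls "still exists" for several seconds: the pod lingers through the hook call plus the normal grace period, and the test finally checks that the handler observed the request. The hook wiring, with a hypothetical handler address, looks like:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # pod name as in the log
spec:
  containers:
  - name: main                       # hypothetical container name
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo                # hypothetical path served by the handler pod
          port: 8080                 # hypothetical handler port
          host: 10.244.1.5           # hypothetical pod IP of the handler started in BeforeEach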
May 8 13:02:24.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:02:24.632: INFO: namespace container-lifecycle-hook-1140 deletion completed in 22.088905199s • [SLOW TEST:44.882 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:02:24.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:02:24.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779" in namespace "downward-api-1883" to be "success or failure" May 8 13:02:24.735: INFO: Pod "downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779": Phase="Pending", Reason="", readiness=false. Elapsed: 20.234312ms May 8 13:02:26.740: INFO: Pod "downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024735563s May 8 13:02:28.745: INFO: Pod "downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030004137s STEP: Saw pod success May 8 13:02:28.745: INFO: Pod "downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779" satisfied condition "success or failure" May 8 13:02:28.748: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779 container client-container: STEP: delete the pod May 8 13:02:28.789: INFO: Waiting for pod downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779 to disappear May 8 13:02:28.795: INFO: Pod downwardapi-volume-dfe993af-0ec6-41fa-a0ec-c0f89256a779 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:02:28.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1883" for this suite. 
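A downwardAPI volume can set permissions per projected file: each item may carry its own mode, which overrides the volume-wide defaultMode, and the test above reads the file listing from the container log to confirm the bits. A sketch (the label, paths, and mode are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode    # hypothetical name
  labels:
    zone: us-east-1a             # example label to project into the volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container       # container name as in the log
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400               # per-item mode; overrides any volume-level defaultMode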
May 8 13:02:34.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:02:34.956: INFO: namespace downward-api-1883 deletion completed in 6.158358184s • [SLOW TEST:10.325 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:02:34.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:02:39.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1257" for this suite. 
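The kubelet spec above also shows no STEPs because it only inspects status: a container whose command always fails ends up in the Terminated state, and the kubelet must record a non-empty termination reason (for a plain non-zero exit this is normally Error) along with the exit code. A minimal reproduction, with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: always-fails        # hypothetical name
spec:
  restartPolicy: Never      # leave the container terminated instead of restarting it
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"] # exits 1 immediately

Inspecting .status.containerStatuses[0].state.terminated on such a pod typically shows reason: Error and exitCode: 1.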
May 8 13:02:45.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:02:45.257: INFO: namespace kubelet-test-1257 deletion completed in 6.115364949s • [SLOW TEST:10.300 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:02:45.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 13:02:49.413: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:02:49.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9367" for this suite. 
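The "Expected: &{DONE}" line in the spec above gives away the mechanism: the container writes DONE to its stdout rather than to /dev/termination-log, exits non-zero, and because terminationMessagePolicy is FallbackToLogsOnError the kubelet falls back to the tail of the container log as the termination message. A sketch that reproduces that flow (names assumed):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-logs   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]     # log output only; the termination-log file stays empty
    terminationMessagePolicy: FallbackToLogsOnError

The fallback only triggers on an error exit with an empty termination-log file; on a clean exit 0 the message would remain empty.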
May 8 13:02:55.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:02:55.680: INFO: namespace container-runtime-9367 deletion completed in 6.113568214s • [SLOW TEST:10.423 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:02:55.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:02:55.762: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:02:59.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5855" for this suite. 
May 8 13:03:45.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:03:45.910: INFO: namespace pods-5855 deletion completed in 46.106787947s • [SLOW TEST:50.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:03:45.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 8 13:03:45.984: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:04:02.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1191" for this suite. 
May 8 13:04:08.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:04:08.273: INFO: namespace pods-1191 deletion completed in 6.097049127s • [SLOW TEST:22.362 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:04:08.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6d07b0f0-d54d-433e-95f9-114eb02d47e1 STEP: Creating a pod to test consume secrets May 8 13:04:08.392: INFO: Waiting up to 5m0s for pod "pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36" in namespace "secrets-9484" to be "success or failure" May 8 13:04:08.402: INFO: Pod "pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36": Phase="Pending", Reason="", readiness=false. Elapsed: 9.786371ms May 8 13:04:10.407: INFO: Pod "pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014447284s May 8 13:04:12.412: INFO: Pod "pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01899089s STEP: Saw pod success May 8 13:04:12.412: INFO: Pod "pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36" satisfied condition "success or failure" May 8 13:04:12.415: INFO: Trying to get logs from node iruya-worker pod pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36 container secret-volume-test: STEP: delete the pod May 8 13:04:12.440: INFO: Waiting for pod pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36 to disappear May 8 13:04:12.444: INFO: Pod pod-secrets-37f7120b-628e-41e1-b29d-9fe59a432c36 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:04:12.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9484" for this suite. 
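"Multiple volumes" in the secrets spec above means the same Secret mounted twice: two volume entries both referencing one secretName, each with its own mountPath, with the test verifying the data is readable at both locations. Roughly (the secret name and key are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test       # container name as in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret        # the same Secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: my-secret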
May 8 13:04:18.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:04:18.535: INFO: namespace secrets-9484 deletion completed in 6.087197107s • [SLOW TEST:10.261 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:04:18.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9061 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9061 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9061 May 8 13:04:18.714: INFO: Found 0 stateful pods, waiting for 1 May 8 13:04:28.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 8 13:04:28.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:04:28.980: INFO: stderr: "I0508 13:04:28.848039 167 log.go:172] (0xc000a0a630) (0xc0003feb40) Create stream\nI0508 13:04:28.848091 167 log.go:172] (0xc000a0a630) (0xc0003feb40) Stream added, broadcasting: 1\nI0508 13:04:28.850446 167 log.go:172] (0xc000a0a630) Reply frame received for 1\nI0508 13:04:28.850504 167 log.go:172] (0xc000a0a630) (0xc0008ee000) Create stream\nI0508 13:04:28.850530 167 log.go:172] (0xc000a0a630) (0xc0008ee000) Stream added, broadcasting: 3\nI0508 13:04:28.851685 167 log.go:172] (0xc000a0a630) Reply frame received for 3\nI0508 13:04:28.851760 167 log.go:172] (0xc000a0a630) (0xc0008ee0a0) Create stream\nI0508 13:04:28.851785 167 log.go:172] (0xc000a0a630) (0xc0008ee0a0) Stream added, broadcasting: 5\nI0508 13:04:28.853679 167 log.go:172] (0xc000a0a630) Reply frame received for 5\nI0508 13:04:28.938878 167 log.go:172] (0xc000a0a630) Data frame received for 5\nI0508 13:04:28.938925 167 log.go:172] (0xc0008ee0a0) (5) Data frame handling\nI0508 13:04:28.938948 167 log.go:172] (0xc0008ee0a0) (5) Data frame 
sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:04:28.973287 167 log.go:172] (0xc000a0a630) Data frame received for 3\nI0508 13:04:28.973311 167 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0508 13:04:28.973322 167 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0508 13:04:28.973645 167 log.go:172] (0xc000a0a630) Data frame received for 5\nI0508 13:04:28.973659 167 log.go:172] (0xc0008ee0a0) (5) Data frame handling\nI0508 13:04:28.973673 167 log.go:172] (0xc000a0a630) Data frame received for 3\nI0508 13:04:28.973678 167 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0508 13:04:28.975792 167 log.go:172] (0xc000a0a630) Data frame received for 1\nI0508 13:04:28.975821 167 log.go:172] (0xc0003feb40) (1) Data frame handling\nI0508 13:04:28.975840 167 log.go:172] (0xc0003feb40) (1) Data frame sent\nI0508 13:04:28.975929 167 log.go:172] (0xc000a0a630) (0xc0003feb40) Stream removed, broadcasting: 1\nI0508 13:04:28.976132 167 log.go:172] (0xc000a0a630) Go away received\nI0508 13:04:28.976355 167 log.go:172] (0xc000a0a630) (0xc0003feb40) Stream removed, broadcasting: 1\nI0508 13:04:28.976374 167 log.go:172] (0xc000a0a630) (0xc0008ee000) Stream removed, broadcasting: 3\nI0508 13:04:28.976385 167 log.go:172] (0xc000a0a630) (0xc0008ee0a0) Stream removed, broadcasting: 5\n" May 8 13:04:28.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:04:28.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:04:28.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 13:04:38.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 13:04:38.988: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:04:39.026: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999342s May 8 13:04:40.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.969466434s May 8 13:04:41.036: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964491908s May 8 13:04:42.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.95986076s May 8 13:04:43.047: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.952410672s May 8 13:04:44.051: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948754079s May 8 13:04:45.056: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.944033536s May 8 13:04:46.060: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.939594059s May 8 13:04:47.065: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.935249204s May 8 13:04:48.068: INFO: Verifying statefulset ss doesn't scale past 1 for another 930.504593ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9061 May 8 13:04:49.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:04:49.312: INFO: stderr: "I0508 13:04:49.207703 190 log.go:172] (0xc00013edc0) (0xc000942780) Create stream\nI0508 13:04:49.207765 190 log.go:172] (0xc00013edc0) (0xc000942780) Stream added, broadcasting: 1\nI0508 13:04:49.210221 190 log.go:172] (0xc00013edc0) Reply frame received for 1\nI0508 13:04:49.210276 190 log.go:172] (0xc00013edc0) (0xc0008e6000) Create stream\nI0508 13:04:49.210307 
190 log.go:172] (0xc00013edc0) (0xc0008e6000) Stream added, broadcasting: 3\nI0508 13:04:49.211487 190 log.go:172] (0xc00013edc0) Reply frame received for 3\nI0508 13:04:49.211543 190 log.go:172] (0xc00013edc0) (0xc000942820) Create stream\nI0508 13:04:49.211560 190 log.go:172] (0xc00013edc0) (0xc000942820) Stream added, broadcasting: 5\nI0508 13:04:49.212747 190 log.go:172] (0xc00013edc0) Reply frame received for 5\nI0508 13:04:49.307389 190 log.go:172] (0xc00013edc0) Data frame received for 3\nI0508 13:04:49.307454 190 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0508 13:04:49.307481 190 log.go:172] (0xc0008e6000) (3) Data frame sent\nI0508 13:04:49.307500 190 log.go:172] (0xc00013edc0) Data frame received for 3\nI0508 13:04:49.307523 190 log.go:172] (0xc00013edc0) Data frame received for 5\nI0508 13:04:49.307554 190 log.go:172] (0xc000942820) (5) Data frame handling\nI0508 13:04:49.307569 190 log.go:172] (0xc000942820) (5) Data frame sent\nI0508 13:04:49.307580 190 log.go:172] (0xc00013edc0) Data frame received for 5\nI0508 13:04:49.307591 190 log.go:172] (0xc000942820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:04:49.307619 190 log.go:172] (0xc0008e6000) (3) Data frame handling\nI0508 13:04:49.308698 190 log.go:172] (0xc00013edc0) Data frame received for 1\nI0508 13:04:49.308714 190 log.go:172] (0xc000942780) (1) Data frame handling\nI0508 13:04:49.308732 190 log.go:172] (0xc000942780) (1) Data frame sent\nI0508 13:04:49.308741 190 log.go:172] (0xc00013edc0) (0xc000942780) Stream removed, broadcasting: 1\nI0508 13:04:49.308879 190 log.go:172] (0xc00013edc0) Go away received\nI0508 13:04:49.309027 190 log.go:172] (0xc00013edc0) (0xc000942780) Stream removed, broadcasting: 1\nI0508 13:04:49.309042 190 log.go:172] (0xc00013edc0) (0xc0008e6000) Stream removed, broadcasting: 3\nI0508 13:04:49.309049 190 log.go:172] (0xc00013edc0) (0xc000942820) Stream removed, broadcasting: 5\n" May 8 13:04:49.312: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:04:49.313: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:04:49.316: INFO: Found 1 stateful pods, waiting for 3 May 8 13:04:59.322: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:04:59.322: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:04:59.322: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 8 13:04:59.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:04:59.559: INFO: stderr: "I0508 13:04:59.457902 210 log.go:172] (0xc000912420) (0xc000a0e820) Create stream\nI0508 13:04:59.457955 210 log.go:172] (0xc000912420) (0xc000a0e820) Stream added, broadcasting: 1\nI0508 13:04:59.460671 210 log.go:172] (0xc000912420) Reply frame received for 1\nI0508 13:04:59.460708 210 log.go:172] (0xc000912420) (0xc000a0e8c0) Create stream\nI0508 13:04:59.460718 210 log.go:172] (0xc000912420) (0xc000a0e8c0) Stream added, broadcasting: 3\nI0508 13:04:59.461895 210 log.go:172] (0xc000912420) Reply frame received for 3\nI0508 13:04:59.461924 210 log.go:172] (0xc000912420) (0xc000311a40) Create 
stream\nI0508 13:04:59.461944 210 log.go:172] (0xc000912420) (0xc000311a40) Stream added, broadcasting: 5\nI0508 13:04:59.462719 210 log.go:172] (0xc000912420) Reply frame received for 5\nI0508 13:04:59.551866 210 log.go:172] (0xc000912420) Data frame received for 5\nI0508 13:04:59.551886 210 log.go:172] (0xc000311a40) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:04:59.551907 210 log.go:172] (0xc000912420) Data frame received for 3\nI0508 13:04:59.551946 210 log.go:172] (0xc000a0e8c0) (3) Data frame handling\nI0508 13:04:59.551977 210 log.go:172] (0xc000a0e8c0) (3) Data frame sent\nI0508 13:04:59.552003 210 log.go:172] (0xc000912420) Data frame received for 3\nI0508 13:04:59.552028 210 log.go:172] (0xc000a0e8c0) (3) Data frame handling\nI0508 13:04:59.552098 210 log.go:172] (0xc000311a40) (5) Data frame sent\nI0508 13:04:59.552145 210 log.go:172] (0xc000912420) Data frame received for 5\nI0508 13:04:59.552169 210 log.go:172] (0xc000311a40) (5) Data frame handling\nI0508 13:04:59.553679 210 log.go:172] (0xc000912420) Data frame received for 1\nI0508 13:04:59.553711 210 log.go:172] (0xc000a0e820) (1) Data frame handling\nI0508 13:04:59.553732 210 log.go:172] (0xc000a0e820) (1) Data frame sent\nI0508 13:04:59.553749 210 log.go:172] (0xc000912420) (0xc000a0e820) Stream removed, broadcasting: 1\nI0508 13:04:59.553963 210 log.go:172] (0xc000912420) Go away received\nI0508 13:04:59.554219 210 log.go:172] (0xc000912420) (0xc000a0e820) Stream removed, broadcasting: 1\nI0508 13:04:59.554240 210 log.go:172] (0xc000912420) (0xc000a0e8c0) Stream removed, broadcasting: 3\nI0508 13:04:59.554250 210 log.go:172] (0xc000912420) (0xc000311a40) Stream removed, broadcasting: 5\n" May 8 13:04:59.559: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:04:59.559: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:04:59.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:04:59.799: INFO: stderr: "I0508 13:04:59.702100 230 log.go:172] (0xc000a1e420) (0xc000616820) Create stream\nI0508 13:04:59.702157 230 log.go:172] (0xc000a1e420) (0xc000616820) Stream added, broadcasting: 1\nI0508 13:04:59.705608 230 log.go:172] (0xc000a1e420) Reply frame received for 1\nI0508 13:04:59.705652 230 log.go:172] (0xc000a1e420) (0xc000616000) Create stream\nI0508 13:04:59.705663 230 log.go:172] (0xc000a1e420) (0xc000616000) Stream added, broadcasting: 3\nI0508 13:04:59.706606 230 log.go:172] (0xc000a1e420) Reply frame received for 3\nI0508 13:04:59.706639 230 log.go:172] (0xc000a1e420) (0xc000616140) Create stream\nI0508 13:04:59.706648 230 log.go:172] (0xc000a1e420) (0xc000616140) Stream added, broadcasting: 5\nI0508 13:04:59.707494 230 log.go:172] (0xc000a1e420) Reply frame received for 5\nI0508 13:04:59.753563 230 log.go:172] (0xc000a1e420) Data frame received for 5\nI0508 13:04:59.753601 230 log.go:172] (0xc000616140) (5) Data frame handling\nI0508 13:04:59.753623 230 log.go:172] (0xc000616140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:04:59.790378 230 log.go:172] (0xc000a1e420) Data frame received for 3\nI0508 13:04:59.790411 230 log.go:172] (0xc000616000) (3) Data frame handling\nI0508 13:04:59.790427 230 log.go:172] (0xc000616000) (3) Data frame sent\nI0508 13:04:59.790443 230 log.go:172] 
(0xc000a1e420) Data frame received for 3\nI0508 13:04:59.790458 230 log.go:172] (0xc000616000) (3) Data frame handling\nI0508 13:04:59.790700 230 log.go:172] (0xc000a1e420) Data frame received for 5\nI0508 13:04:59.790727 230 log.go:172] (0xc000616140) (5) Data frame handling\nI0508 13:04:59.793972 230 log.go:172] (0xc000a1e420) Data frame received for 1\nI0508 13:04:59.794036 230 log.go:172] (0xc000616820) (1) Data frame handling\nI0508 13:04:59.794067 230 log.go:172] (0xc000616820) (1) Data frame sent\nI0508 13:04:59.794092 230 log.go:172] (0xc000a1e420) (0xc000616820) Stream removed, broadcasting: 1\nI0508 13:04:59.794109 230 log.go:172] (0xc000a1e420) Go away received\nI0508 13:04:59.794590 230 log.go:172] (0xc000a1e420) (0xc000616820) Stream removed, broadcasting: 1\nI0508 13:04:59.794611 230 log.go:172] (0xc000a1e420) (0xc000616000) Stream removed, broadcasting: 3\nI0508 13:04:59.794631 230 log.go:172] (0xc000a1e420) (0xc000616140) Stream removed, broadcasting: 5\n" May 8 13:04:59.799: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:04:59.799: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:04:59.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:05:00.014: INFO: stderr: "I0508 13:04:59.920633 252 log.go:172] (0xc000116790) (0xc0005d4780) Create stream\nI0508 13:04:59.920675 252 log.go:172] (0xc000116790) (0xc0005d4780) Stream added, broadcasting: 1\nI0508 13:04:59.922886 252 log.go:172] (0xc000116790) Reply frame received for 1\nI0508 13:04:59.922909 252 log.go:172] (0xc000116790) (0xc0005a4000) Create stream\nI0508 13:04:59.922916 252 log.go:172] (0xc000116790) (0xc0005a4000) Stream added, broadcasting: 3\nI0508 13:04:59.923555 252 log.go:172] (0xc000116790) Reply frame received for 3\nI0508 13:04:59.923590 252 log.go:172] (0xc000116790) (0xc00039c000) Create stream\nI0508 13:04:59.923601 252 log.go:172] (0xc000116790) (0xc00039c000) Stream added, broadcasting: 5\nI0508 13:04:59.924261 252 log.go:172] (0xc000116790) Reply frame received for 5\nI0508 13:04:59.986305 252 log.go:172] (0xc000116790) Data frame received for 5\nI0508 13:04:59.986334 252 log.go:172] (0xc00039c000) (5) Data frame handling\nI0508 13:04:59.986353 252 log.go:172] (0xc00039c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:05:00.008441 252 log.go:172] (0xc000116790) Data frame received for 5\nI0508 13:05:00.008470 252 log.go:172] (0xc00039c000) (5) Data frame handling\nI0508 13:05:00.008505 252 log.go:172] (0xc000116790) Data frame received for 3\nI0508 13:05:00.008524 252 log.go:172] (0xc0005a4000) (3) Data frame handling\nI0508 13:05:00.008544 252 log.go:172] (0xc0005a4000) (3) Data frame sent\nI0508 13:05:00.008556 252 log.go:172] (0xc000116790) Data frame received for 3\nI0508 13:05:00.008570 252 log.go:172] (0xc0005a4000) (3) Data frame handling\nI0508 13:05:00.010333 252 log.go:172] (0xc000116790) Data frame received for 1\nI0508 13:05:00.010364 252 log.go:172] (0xc0005d4780) (1) Data frame handling\nI0508 13:05:00.010377 252 log.go:172] (0xc0005d4780) (1) Data frame sent\nI0508 13:05:00.010391 252 log.go:172] (0xc000116790) (0xc0005d4780) Stream removed, broadcasting: 1\nI0508 13:05:00.010415 252 log.go:172] (0xc000116790) Go away received\nI0508 13:05:00.010708 252 log.go:172] (0xc000116790) 
(0xc0005d4780) Stream removed, broadcasting: 1\nI0508 13:05:00.010723 252 log.go:172] (0xc000116790) (0xc0005a4000) Stream removed, broadcasting: 3\nI0508 13:05:00.010730 252 log.go:172] (0xc000116790) (0xc00039c000) Stream removed, broadcasting: 5\n" May 8 13:05:00.014: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:05:00.014: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:05:00.014: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:05:00.036: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 8 13:05:10.044: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 13:05:10.044: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 13:05:10.044: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 13:05:10.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999595s May 8 13:05:11.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988082041s May 8 13:05:12.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982642489s May 8 13:05:13.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978950675s May 8 13:05:14.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973646849s May 8 13:05:15.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969037763s May 8 13:05:16.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963855567s May 8 13:05:17.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957489756s May 8 13:05:18.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.927476065s May 8 13:05:19.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 922.655545ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9061 May 8 13:05:20.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:05:20.373: INFO: stderr: "I0508 13:05:20.275169 270 log.go:172] (0xc000a5c790) (0xc0008da960) Create stream\nI0508 13:05:20.275224 270 log.go:172] (0xc000a5c790) (0xc0008da960) Stream added, broadcasting: 1\nI0508 13:05:20.278757 270 log.go:172] (0xc000a5c790) Reply frame received for 1\nI0508 13:05:20.278799 270 log.go:172] (0xc000a5c790) (0xc0008da000) Create stream\nI0508 13:05:20.278809 270 log.go:172] (0xc000a5c790) (0xc0008da000) Stream added, broadcasting: 3\nI0508 13:05:20.279842 270 log.go:172] (0xc000a5c790) Reply frame received for 3\nI0508 13:05:20.279887 270 log.go:172] (0xc000a5c790) (0xc0008da0a0) Create stream\nI0508 13:05:20.279908 270 log.go:172] (0xc000a5c790) (0xc0008da0a0) Stream added, broadcasting: 5\nI0508 13:05:20.280880 270 log.go:172] (0xc000a5c790) Reply frame received for 5\nI0508 13:05:20.365830 270 log.go:172] (0xc000a5c790) Data frame received for 5\nI0508 13:05:20.365895 270 log.go:172] (0xc0008da0a0) (5) Data frame handling\nI0508 13:05:20.365918 270 log.go:172] (0xc0008da0a0) (5) Data frame sent\nI0508 13:05:20.365935 270 log.go:172] (0xc000a5c790) Data frame received for 5\nI0508 13:05:20.365946 270 log.go:172] (0xc0008da0a0) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/share/nginx/html/\nI0508 13:05:20.366011 270 log.go:172] (0xc000a5c790) Data frame received for 3\nI0508 13:05:20.366063 270 log.go:172] (0xc0008da000) (3) Data frame handling\nI0508 13:05:20.366089 270 log.go:172] (0xc0008da000) (3) Data frame sent\nI0508 13:05:20.366126 270 log.go:172] (0xc000a5c790) Data frame received for 3\nI0508 13:05:20.366168 270 log.go:172] (0xc0008da000) (3) Data frame handling\nI0508 13:05:20.367215 270 log.go:172] (0xc000a5c790) Data frame received for 1\nI0508 13:05:20.367242 270 log.go:172] (0xc0008da960) (1) Data frame handling\nI0508 13:05:20.367260 270 log.go:172] (0xc0008da960) (1) Data frame sent\nI0508 13:05:20.367277 270 log.go:172] (0xc000a5c790) (0xc0008da960) Stream removed, broadcasting: 1\nI0508 13:05:20.367293 270 log.go:172] (0xc000a5c790) Go away received\nI0508 13:05:20.367692 270 log.go:172] (0xc000a5c790) (0xc0008da960) Stream removed, broadcasting: 1\nI0508 13:05:20.367718 270 log.go:172] (0xc000a5c790) (0xc0008da000) Stream removed, broadcasting: 3\nI0508 13:05:20.367731 270 log.go:172] (0xc000a5c790) (0xc0008da0a0) Stream removed, broadcasting: 5\n" May 8 13:05:20.373: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:05:20.373: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:05:20.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:05:20.590: INFO: stderr: "I0508 13:05:20.498179 290 log.go:172] (0xc0009b4420) (0xc00044c6e0) Create stream\nI0508 13:05:20.498238 290 log.go:172] (0xc0009b4420) (0xc00044c6e0) Stream added, broadcasting: 1\nI0508 13:05:20.502337 290 log.go:172] (0xc0009b4420) Reply frame received for 1\nI0508 13:05:20.502426 290 log.go:172] (0xc0009b4420) (0xc0003120a0) Create stream\nI0508 13:05:20.502447 290 log.go:172] (0xc0009b4420) (0xc0003120a0) Stream added, broadcasting: 3\nI0508 13:05:20.503474 290 log.go:172] (0xc0009b4420) Reply frame received for 3\nI0508 13:05:20.503527 290 log.go:172] (0xc0009b4420) (0xc000312140) Create stream\nI0508 13:05:20.503550 290 log.go:172] (0xc0009b4420) (0xc000312140) Stream added, broadcasting: 5\nI0508 13:05:20.504363 290 log.go:172] (0xc0009b4420) Reply frame received for 5\nI0508 13:05:20.578028 290 log.go:172] (0xc0009b4420) Data frame received for 5\nI0508 13:05:20.578087 290 log.go:172] (0xc000312140) (5) Data frame handling\nI0508 13:05:20.578107 290 log.go:172] (0xc000312140) (5) Data frame sent\nI0508 13:05:20.578120 290 log.go:172] (0xc0009b4420) Data frame received for 5\nI0508 13:05:20.578131 290 log.go:172] (0xc000312140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:05:20.578169 290 log.go:172] (0xc0009b4420) Data frame received for 3\nI0508 13:05:20.578195 290 log.go:172] (0xc0003120a0) (3) Data frame handling\nI0508 13:05:20.578225 290 log.go:172] (0xc0003120a0) (3) Data frame sent\nI0508 13:05:20.578241 290 log.go:172] (0xc0009b4420) Data frame received for 3\nI0508 13:05:20.578255 290 log.go:172] (0xc0003120a0) (3) Data frame handling\nI0508 13:05:20.584735 290 log.go:172] (0xc0009b4420) Data frame received for 1\nI0508 13:05:20.584773 290 log.go:172] (0xc00044c6e0) (1) Data frame handling\nI0508 13:05:20.584783 290 log.go:172] (0xc00044c6e0) (1) Data frame sent\nI0508 13:05:20.584794 290 log.go:172] (0xc0009b4420) (0xc00044c6e0) Stream removed, broadcasting: 
1\nI0508 13:05:20.584833 290 log.go:172] (0xc0009b4420) Go away received\nI0508 13:05:20.585084 290 log.go:172] (0xc0009b4420) (0xc00044c6e0) Stream removed, broadcasting: 1\nI0508 13:05:20.585101 290 log.go:172] (0xc0009b4420) (0xc0003120a0) Stream removed, broadcasting: 3\nI0508 13:05:20.585258 290 log.go:172] (0xc0009b4420) (0xc000312140) Stream removed, broadcasting: 5\n"
May 8 13:05:20.590: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 8 13:05:20.590: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 8 13:05:20.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9061 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 13:05:20.804: INFO: stderr: "I0508 13:05:20.722871 310 log.go:172] (0xc000a4a0b0) (0xc00098e1e0) Create stream\nI0508 13:05:20.722940 310 log.go:172] (0xc000a4a0b0) (0xc00098e1e0) Stream added, broadcasting: 1\nI0508 13:05:20.725046 310 log.go:172] (0xc000a4a0b0) Reply frame received for 1\nI0508 13:05:20.725077 310 log.go:172] (0xc000a4a0b0) (0xc000762280) Create stream\nI0508 13:05:20.725088 310 log.go:172] (0xc000a4a0b0) (0xc000762280) Stream added, broadcasting: 3\nI0508 13:05:20.726001 310 log.go:172] (0xc000a4a0b0) Reply frame received for 3\nI0508 13:05:20.726048 310 log.go:172] (0xc000a4a0b0) (0xc000762320) Create stream\nI0508 13:05:20.726063 310 log.go:172] (0xc000a4a0b0) (0xc000762320) Stream added, broadcasting: 5\nI0508 13:05:20.726829 310 log.go:172] (0xc000a4a0b0) Reply frame received for 5\nI0508 13:05:20.797538 310 log.go:172] (0xc000a4a0b0) Data frame received for 5\nI0508 13:05:20.797652 310 log.go:172] (0xc000762320) (5) Data frame handling\nI0508 13:05:20.797673 310 log.go:172] (0xc000762320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:05:20.797691 310 log.go:172] (0xc000a4a0b0) Data frame received for 3\nI0508 13:05:20.797699 310 log.go:172] (0xc000762280) (3) Data frame handling\nI0508 13:05:20.797715 310 log.go:172] (0xc000762280) (3) Data frame sent\nI0508 13:05:20.797833 310 log.go:172] (0xc000a4a0b0) Data frame received for 3\nI0508 13:05:20.797872 310 log.go:172] (0xc000762280) (3) Data frame handling\nI0508 13:05:20.797895 310 log.go:172] (0xc000a4a0b0) Data frame received for 5\nI0508 13:05:20.797906 310 log.go:172] (0xc000762320) (5) Data frame handling\nI0508 13:05:20.799312 310 log.go:172] (0xc000a4a0b0) Data frame received for 1\nI0508 13:05:20.799329 310 log.go:172] (0xc00098e1e0) (1) Data frame handling\nI0508 13:05:20.799339 310 log.go:172] (0xc00098e1e0) (1) Data frame sent\nI0508 13:05:20.799355 310 log.go:172] (0xc000a4a0b0) (0xc00098e1e0) Stream removed, broadcasting: 1\nI0508 13:05:20.799385 310 log.go:172] (0xc000a4a0b0) Go away received\nI0508 13:05:20.799799 310 log.go:172] (0xc000a4a0b0) (0xc00098e1e0) Stream removed, broadcasting: 1\nI0508 13:05:20.799817 310 log.go:172] (0xc000a4a0b0) (0xc000762280) Stream removed, broadcasting: 3\nI0508 13:05:20.799825 310 log.go:172] (0xc000a4a0b0) (0xc000762320) Stream removed, broadcasting: 5\n"
May 8 13:05:20.804: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 8 13:05:20.804: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 8 13:05:20.804: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 8 13:06:00.884: INFO: Deleting all statefulset in ns statefulset-9061
May 8 13:06:00.887: INFO: Scaling statefulset ss to 0
May 8 13:06:00.894: INFO: Waiting for statefulset status.replicas updated to 0
May 8 13:06:00.896: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:06:00.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9061" for this suite.
May 8 13:06:06.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:06:07.045: INFO: namespace statefulset-9061 deletion completed in 6.128946719s
• [SLOW TEST:108.509 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
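The reverse-order verification above relies on the StatefulSet's default OrderedReady pod management: on scale-down the controller removes one pod at a time, highest ordinal first (ss-2, then ss-1, then ss-0), waiting for each to terminate before touching the next. A rough way to replay the same scale-down by hand with plain kubectl (a sketch, assuming the same kubeconfig and the test's namespace, which the suite deletes right afterwards):

    kubectl --kubeconfig=/root/.kube/config -n statefulset-9061 scale statefulset ss --replicas=0
    # Watch the pods terminate one at a time, highest ordinal first:
    kubectl --kubeconfig=/root/.kube/config -n statefulset-9061 get pods -w
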
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:06:07.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
May 8 13:06:07.163: INFO: Waiting up to 5m0s for pod "pod-42987a56-736a-4f26-a0a1-c7294b0b8de4" in namespace "emptydir-8725" to be "success or failure"
May 8 13:06:07.171: INFO: Pod "pod-42987a56-736a-4f26-a0a1-c7294b0b8de4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.304185ms
May 8 13:06:09.218: INFO: Pod "pod-42987a56-736a-4f26-a0a1-c7294b0b8de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055175805s
May 8 13:06:11.221: INFO: Pod "pod-42987a56-736a-4f26-a0a1-c7294b0b8de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058658082s
STEP: Saw pod success
May 8 13:06:11.221: INFO: Pod "pod-42987a56-736a-4f26-a0a1-c7294b0b8de4" satisfied condition "success or failure"
May 8 13:06:11.224: INFO: Trying to get logs from node iruya-worker pod pod-42987a56-736a-4f26-a0a1-c7294b0b8de4 container test-container:
STEP: delete the pod
May 8 13:06:11.514: INFO: Waiting for pod pod-42987a56-736a-4f26-a0a1-c7294b0b8de4 to disappear
May 8 13:06:11.542: INFO: Pod pod-42987a56-736a-4f26-a0a1-c7294b0b8de4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:06:11.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8725" for this suite.
May 8 13:06:17.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:06:17.636: INFO: namespace emptydir-8725 deletion completed in 6.090198061s
• [SLOW TEST:10.591 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
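For context, the "correct mode" this spec checks is the world-writable (0777) permission bits that kubelet applies to an emptyDir mount on the node's default medium; the pod spec itself is not shown in the log, so the following hand-rolled approximation is a sketch with a hypothetical pod name and a plain busybox image rather than the e2e mounttest fixture:

    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo        # hypothetical name, not the test's pod
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF
    kubectl logs emptydir-mode-demo    # expect drwxrwxrwx on the mount point
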
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:06:17.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 8 13:06:17.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
May 8 13:06:18.081: INFO: stderr: ""
May 8 13:06:18.081: INFO: stdout: "replicationcontroller/redis-master created\n"
May 8 13:06:18.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4152'
May 8 13:06:18.401: INFO: stderr: ""
May 8 13:06:18.401: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 8 13:06:19.406: INFO: Selector matched 1 pods for map[app:redis]
May 8 13:06:19.406: INFO: Found 0 / 1
May 8 13:06:20.410: INFO: Selector matched 1 pods for map[app:redis]
May 8 13:06:20.410: INFO: Found 0 / 1
May 8 13:06:21.405: INFO: Selector matched 1 pods for map[app:redis]
May 8 13:06:21.406: INFO: Found 1 / 1
May 8 13:06:21.406: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 8 13:06:21.408: INFO: Selector matched 1 pods for map[app:redis]
May 8 13:06:21.408: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 8 13:06:21.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-j99rx --namespace=kubectl-4152'
May 8 13:06:23.892: INFO: stderr: ""
May 8 13:06:23.892: INFO: stdout: "Name: redis-master-j99rx\nNamespace: kubectl-4152\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Fri, 08 May 2020 13:06:18 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.226\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://dbdca135390f6fa6201ac7f661bd39994e7f5c22765bd1a276c2107274ce7611\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 08 May 2020 13:06:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-l9m2m (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-l9m2m:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-l9m2m\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-4152/redis-master-j99rx to iruya-worker2\n Normal Pulled 4s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-worker2 Created container redis-master\n Normal Started 3s kubelet, iruya-worker2 Started container redis-master\n"
May 8 13:06:23.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4152'
May 8 13:06:24.013: INFO: stderr: ""
May 8 13:06:24.013: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4152\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-j99rx\n"
May 8 13:06:24.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4152'
May 8 13:06:24.116: INFO: stderr: ""
May 8 13:06:24.116: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4152\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.108.19.102\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.226:6379\nSession Affinity: None\nEvents: \n"
May 8 13:06:24.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
May 8 13:06:24.257: INFO: stderr: ""
May 8 13:06:24.257: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 08 May 2020 13:05:27 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 08 May 2020 13:05:27 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 08 May 2020 13:05:27 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 08 May 2020 13:05:27 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 53d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 53d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 8 13:06:24.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4152'
May 8 13:06:24.359: INFO: stderr: ""
May 8 13:06:24.359: INFO: stdout: "Name: kubectl-4152\nLabels: e2e-framework=kubectl\n e2e-run=a8795f82-b82d-40be-9f6f-159a104a1952\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:06:24.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4152" for this suite.
May 8 13:06:46.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:06:46.455: INFO: namespace kubectl-4152 deletion completed in 22.092561831s
• [SLOW TEST:28.818 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
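Stripped of framework plumbing, the spec above is just a series of kubectl describe calls whose output is scanned for the expected fields (pod name, image and node; RC replica counts; service ClusterIP and endpoints; node conditions; namespace status). The same sequence can be replayed directly with the commands the log itself ran (output will of course differ on another cluster):

    kubectl --kubeconfig=/root/.kube/config -n kubectl-4152 describe pod redis-master-j99rx
    kubectl --kubeconfig=/root/.kube/config -n kubectl-4152 describe rc redis-master
    kubectl --kubeconfig=/root/.kube/config -n kubectl-4152 describe service redis-master
    kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane
    kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4152
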
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:06:46.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:06:46.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5215" for this suite.
May 8 13:06:52.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:06:52.839: INFO: namespace kubelet-test-5215 deletion completed in 6.12475283s
• [SLOW TEST:6.383 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
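Note the spec body is empty between [It] and [AfterEach]: the fixture's BeforeEach creates a pod whose busybox command always exits nonzero, and the test's only assertion is that such a perpetually failing pod can still be deleted cleanly. A minimal hand sketch (the pod name, image and command here are assumptions, not the fixture's exact spec):

    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false-demo            # hypothetical
    spec:
      containers:
      - name: bin-false
        image: busybox
        command: ["/bin/false"]       # always exits 1, so the container never stays up
    EOF
    kubectl --kubeconfig=/root/.kube/config delete pod bin-false-demo   # deletion must still succeed
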
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:06:52.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 8 13:06:57.444: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1f0c2471-9b9e-4c8b-a7ef-9b1f35de83b6"
May 8 13:06:57.444: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1f0c2471-9b9e-4c8b-a7ef-9b1f35de83b6" in namespace "pods-90" to be "terminated due to deadline exceeded"
May 8 13:06:57.466: INFO: Pod "pod-update-activedeadlineseconds-1f0c2471-9b9e-4c8b-a7ef-9b1f35de83b6": Phase="Running", Reason="", readiness=true. Elapsed: 21.583101ms
May 8 13:06:59.470: INFO: Pod "pod-update-activedeadlineseconds-1f0c2471-9b9e-4c8b-a7ef-9b1f35de83b6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.025724893s
May 8 13:06:59.470: INFO: Pod "pod-update-activedeadlineseconds-1f0c2471-9b9e-4c8b-a7ef-9b1f35de83b6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:06:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-90" for this suite.
May 8 13:07:05.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:07:05.594: INFO: namespace pods-90 deletion completed in 6.120429431s
• [SLOW TEST:12.755 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
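activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod; lowering it on a running pod makes the kubelet kill the pod and mark it Failed with reason DeadlineExceeded, which is exactly the Running -> Failed transition logged above. A sketch of the same update against a hypothetical already-running pod:

    kubectl --kubeconfig=/root/.kube/config -n pods-90 patch pod some-running-pod \
      -p '{"spec":{"activeDeadlineSeconds":5}}'
    # ~5s later the pod goes Failed with reason DeadlineExceeded:
    kubectl --kubeconfig=/root/.kube/config -n pods-90 get pod some-running-pod -o jsonpath='{.status.reason}'
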
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:07:05.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 8 13:07:09.715: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:07:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1350" for this suite.
May 8 13:07:15.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:07:15.852: INFO: namespace container-runtime-1350 deletion completed in 6.094367178s
• [SLOW TEST:10.258 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
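The "Expected: &{} to match ... --" line is the assertion that the termination message is empty: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the terminated state's message only when the container fails, so a successful exit leaves it blank. A sketch of the relevant container fields and how to read the message back (pod name is illustrative):

    # In the container spec:
    #   terminationMessagePath: /dev/termination-log
    #   terminationMessagePolicy: FallbackToLogsOnError
    kubectl --kubeconfig=/root/.kube/config get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
    # Expect empty output for a container that exited 0.
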
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:07:15.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 8 13:07:22.823: INFO: 10 pods remaining
May 8 13:07:22.823: INFO: 0 pods has nil DeletionTimestamp
May 8 13:07:22.823: INFO:
May 8 13:07:23.250: INFO: 0 pods remaining
May 8 13:07:23.250: INFO: 0 pods has nil DeletionTimestamp
May 8 13:07:23.250: INFO:
May 8 13:07:24.509: INFO: 0 pods remaining
May 8 13:07:24.509: INFO: 0 pods has nil DeletionTimestamp
May 8 13:07:24.509: INFO:
STEP: Gathering metrics
W0508 13:07:25.276730 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 13:07:25.276: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:07:25.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9773" for this suite.
May 8 13:07:31.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:07:31.859: INFO: namespace gc-9773 deletion completed in 6.547121272s
• [SLOW TEST:16.007 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
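In API terms, "deleteOptions says so" means the DELETE request carries propagationPolicy: Foreground, which places a foregroundDeletion finalizer on the RC so the object survives until the garbage collector has removed all of its pods; the "10 pods remaining / 0 pods has nil DeletionTimestamp" lines are the test polling that intermediate state. A sketch of issuing the same foreground delete by hand through the REST API (namespace and RC name are placeholders):

    kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://localhost:8001/api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>
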
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:07:31.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:07:58.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1864" for this suite.
May 8 13:08:04.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:08:04.186: INFO: namespace namespaces-1864 deletion completed in 6.086865429s
STEP: Destroying namespace "nsdeletetest-150" for this suite.
May 8 13:08:04.189: INFO: Namespace nsdeletetest-150 was already deleted
STEP: Destroying namespace "nsdeletetest-8775" for this suite.
May 8 13:08:10.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:08:10.287: INFO: namespace nsdeletetest-8775 deletion completed in 6.097850162s
• [SLOW TEST:38.428 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:08:10.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-fb66a580-4b37-4e89-8939-7a3408713397
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:08:10.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8723" for this suite.
May 8 13:08:16.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:08:16.427: INFO: namespace configmap-8723 deletion completed in 6.10288691s
• [SLOW TEST:6.139 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
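The ConfigMap spec succeeds by failing: API-server validation rejects a ConfigMap whose data map contains an empty key, and the test passes when that create call errors out. The same rejection can be provoked directly (hypothetical object name):

    cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: empty-key-demo            # hypothetical
    data:
      "": "value"                     # empty key: the API server refuses this
    EOF
    # Expect a validation error rather than "configmap/empty-key-demo created".
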
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:08:16.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0508 13:08:57.414469 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 13:08:57.414: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:08:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2145" for this suite.
May 8 13:09:07.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:09:07.512: INFO: namespace gc-2145 deletion completed in 10.095683311s
• [SLOW TEST:51.085 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
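This is the mirror image of the foreground-deletion test: here the delete carries propagationPolicy: Orphan, so the RC disappears but its pods keep running, and the 30-second wait verifies the garbage collector does not reap them. With the kubectl generation in this log the same delete is spelled roughly as follows (a sketch; newer kubectl spells the flag --cascade=orphan, and namespace/RC names are placeholders):

    kubectl --kubeconfig=/root/.kube/config -n <ns> delete rc <rc-name> --cascade=false
    kubectl --kubeconfig=/root/.kube/config -n <ns> get pods   # pods remain, now ownerless
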
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:09:07.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
May 8 13:09:07.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6873'
May 8 13:09:07.830: INFO: stderr: ""
May 8 13:09:07.830: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 8 13:09:07.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6873'
May 8 13:09:07.940: INFO: stderr: ""
May 8 13:09:07.940: INFO: stdout: "update-demo-nautilus-lwckx update-demo-nautilus-n8t4d "
May 8 13:09:07.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwckx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:08.022: INFO: stderr: ""
May 8 13:09:08.022: INFO: stdout: ""
May 8 13:09:08.022: INFO: update-demo-nautilus-lwckx is created but not running
May 8 13:09:13.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6873'
May 8 13:09:13.133: INFO: stderr: ""
May 8 13:09:13.133: INFO: stdout: "update-demo-nautilus-lwckx update-demo-nautilus-n8t4d "
May 8 13:09:13.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwckx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:13.233: INFO: stderr: ""
May 8 13:09:13.233: INFO: stdout: "true"
May 8 13:09:13.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwckx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:13.324: INFO: stderr: ""
May 8 13:09:13.324: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 8 13:09:13.324: INFO: validating pod update-demo-nautilus-lwckx
May 8 13:09:13.343: INFO: got data: {
  "image": "nautilus.jpg"
}
May 8 13:09:13.343: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 8 13:09:13.343: INFO: update-demo-nautilus-lwckx is verified up and running
May 8 13:09:13.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n8t4d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:13.434: INFO: stderr: ""
May 8 13:09:13.434: INFO: stdout: "true"
May 8 13:09:13.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n8t4d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:13.522: INFO: stderr: ""
May 8 13:09:13.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 8 13:09:13.522: INFO: validating pod update-demo-nautilus-n8t4d
May 8 13:09:13.536: INFO: got data: {
  "image": "nautilus.jpg"
}
May 8 13:09:13.536: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 8 13:09:13.536: INFO: update-demo-nautilus-n8t4d is verified up and running
STEP: rolling-update to new replication controller
May 8 13:09:13.538: INFO: scanned /root for discovery docs:
May 8 13:09:13.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6873'
May 8 13:09:36.893: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 8 13:09:36.893: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 8 13:09:36.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6873'
May 8 13:09:37.002: INFO: stderr: ""
May 8 13:09:37.002: INFO: stdout: "update-demo-kitten-ndx6k update-demo-kitten-nmtvr "
May 8 13:09:37.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ndx6k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:37.083: INFO: stderr: ""
May 8 13:09:37.083: INFO: stdout: "true"
May 8 13:09:37.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ndx6k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:37.163: INFO: stderr: ""
May 8 13:09:37.163: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 8 13:09:37.163: INFO: validating pod update-demo-kitten-ndx6k
May 8 13:09:37.168: INFO: got data: {
  "image": "kitten.jpg"
}
May 8 13:09:37.168: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 8 13:09:37.169: INFO: update-demo-kitten-ndx6k is verified up and running
May 8 13:09:37.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nmtvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:37.276: INFO: stderr: ""
May 8 13:09:37.276: INFO: stdout: "true"
May 8 13:09:37.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nmtvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6873'
May 8 13:09:37.366: INFO: stderr: ""
May 8 13:09:37.366: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 8 13:09:37.366: INFO: validating pod update-demo-kitten-nmtvr
May 8 13:09:37.370: INFO: got data: {
  "image": "kitten.jpg"
}
May 8 13:09:37.370: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 8 13:09:37.370: INFO: update-demo-kitten-nmtvr is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:09:37.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6873" for this suite.
May 8 13:10:05.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:10:05.488: INFO: namespace kubectl-6873 deletion completed in 28.114987986s
• [SLOW TEST:57.975 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:10:05.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 8 13:10:05.592: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 8 13:10:10.597: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 8 13:10:10.597: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 8 13:10:12.602: INFO: Creating deployment "test-rollover-deployment"
May 8 13:10:12.670: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 8 13:10:14.678: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 8 13:10:14.685: INFO: Ensure that both replica sets have 1 created replica
May 8 13:10:14.691: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 8 13:10:14.698: INFO: Updating deployment test-rollover-deployment
May 8 13:10:14.698: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 8 13:10:16.749: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 8 13:10:16.754: INFO: Make sure deployment "test-rollover-deployment" is complete
May 8 13:10:16.759: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:16.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540214, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:18.768: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:18.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540217, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:20.768: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:20.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540217, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:22.768: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:22.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540217, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:24.768: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:24.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540217, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:26.768: INFO: all replica sets need to contain the pod-template-hash label
May 8 13:10:26.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540217, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724540212, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 13:10:28.769: INFO:
May 8 13:10:28.769: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 8 13:10:28.778: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6480,SelfLink:/apis/apps/v1/namespaces/deployment-6480/deployments/test-rollover-deployment,UID:2db1d6af-2e7f-4d1c-9154-230ca580c4b3,ResourceVersion:9708920,Generation:2,CreationTimestamp:2020-05-08 13:10:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-08 13:10:12 +0000 UTC 2020-05-08 13:10:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-08 13:10:27 +0000 UTC 2020-05-08 13:10:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
May 8 13:10:28.781: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6480,SelfLink:/apis/apps/v1/namespaces/deployment-6480/replicasets/test-rollover-deployment-854595fc44,UID:b8fcb63a-c8a9-4817-8f9a-974398c16be5,ResourceVersion:9708907,Generation:2,CreationTimestamp:2020-05-08 13:10:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2db1d6af-2e7f-4d1c-9154-230ca580c4b3 0xc0028428e7 0xc0028428e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 8 13:10:28.781: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 8 13:10:28.782: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6480,SelfLink:/apis/apps/v1/namespaces/deployment-6480/replicasets/test-rollover-controller,UID:1e3baf1a-2b3b-47a2-8ad7-d28665a35cfe,ResourceVersion:9708918,Generation:2,CreationTimestamp:2020-05-08 13:10:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2db1d6af-2e7f-4d1c-9154-230ca580c4b3 0xc0028426cf 0xc002842700}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 8 13:10:28.782: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6480,SelfLink:/apis/apps/v1/namespaces/deployment-6480/replicasets/test-rollover-deployment-9b8b997cf,UID:9b37690d-dbf5-417f-bdf5-2f4e102d384e,ResourceVersion:9708871,Generation:2,CreationTimestamp:2020-05-08 13:10:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2db1d6af-2e7f-4d1c-9154-230ca580c4b3 0xc0028429b0 0xc0028429b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 8 13:10:28.786: INFO: Pod "test-rollover-deployment-854595fc44-5w7sh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5w7sh,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6480,SelfLink:/api/v1/namespaces/deployment-6480/pods/test-rollover-deployment-854595fc44-5w7sh,UID:99aa4e4b-88fb-4ab3-87dd-a8a036961255,ResourceVersion:9708883,Generation:0,CreationTimestamp:2020-05-08 13:10:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b8fcb63a-c8a9-4817-8f9a-974398c16be5 0xc002843bd7 0xc002843bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-89bkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-89bkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-89bkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002843c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002843c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:10:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:10:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00
+0000 UTC 2020-05-08 13:10:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:10:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.70,StartTime:2020-05-08 13:10:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-08 13:10:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3fc2de1d5b5d8e11be757cbf303670f645c85b5418dc8aed86ae6762fa3c59c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:10:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6480" for this suite. May 8 13:10:36.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:10:36.915: INFO: namespace deployment-6480 deletion completed in 8.125993882s • [SLOW TEST:31.426 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:10:36.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-622ee3e2-2e31-4ace-abb0-fba18b4ec104 STEP: Creating a pod to test consume configMaps May 8 13:10:37.043: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836" in namespace "configmap-5418" to be "success or failure" May 8 13:10:37.068: INFO: Pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836": Phase="Pending", Reason="", readiness=false. Elapsed: 25.383054ms May 8 13:10:39.090: INFO: Pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047352597s May 8 13:10:41.094: INFO: Pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836": Phase="Running", Reason="", readiness=true. Elapsed: 4.05104526s May 8 13:10:43.099: INFO: Pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.056341034s STEP: Saw pod success May 8 13:10:43.099: INFO: Pod "pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836" satisfied condition "success or failure" May 8 13:10:43.104: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836 container configmap-volume-test: STEP: delete the pod May 8 13:10:43.128: INFO: Waiting for pod pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836 to disappear May 8 13:10:43.132: INFO: Pod pod-configmaps-2ed2cfb9-b937-48eb-9917-c103291f5836 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:10:43.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5418" for this suite. May 8 13:10:49.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:10:49.226: INFO: namespace configmap-5418 deletion completed in 6.090487256s • [SLOW TEST:12.308 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:10:49.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-x89k STEP: Creating a pod to test atomic-volume-subpath May 8 13:10:49.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x89k" in namespace "subpath-390" to be "success or failure" May 8 13:10:49.409: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Pending", Reason="", readiness=false. Elapsed: 28.037998ms May 8 13:10:51.413: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032222845s May 8 13:10:53.417: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 4.036179808s May 8 13:10:55.421: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 6.040304854s May 8 13:10:57.425: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 8.04410842s May 8 13:10:59.429: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 10.047553188s May 8 13:11:01.433: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.052036201s May 8 13:11:03.438: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 14.056926223s May 8 13:11:05.442: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 16.061234934s May 8 13:11:07.446: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 18.06512635s May 8 13:11:09.450: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 20.069412365s May 8 13:11:11.455: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Running", Reason="", readiness=true. Elapsed: 22.074120133s May 8 13:11:13.460: INFO: Pod "pod-subpath-test-configmap-x89k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.079015517s STEP: Saw pod success May 8 13:11:13.460: INFO: Pod "pod-subpath-test-configmap-x89k" satisfied condition "success or failure" May 8 13:11:13.463: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-x89k container test-container-subpath-configmap-x89k: STEP: delete the pod May 8 13:11:13.523: INFO: Waiting for pod pod-subpath-test-configmap-x89k to disappear May 8 13:11:13.543: INFO: Pod pod-subpath-test-configmap-x89k no longer exists STEP: Deleting pod pod-subpath-test-configmap-x89k May 8 13:11:13.544: INFO: Deleting pod "pod-subpath-test-configmap-x89k" in namespace "subpath-390" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:11:13.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-390" for this suite. May 8 13:11:19.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:11:19.647: INFO: namespace subpath-390 deletion completed in 6.098440941s • [SLOW TEST:30.420 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:11:19.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 8 13:11:19.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1589' May 8 13:11:20.140: INFO: stderr: "" May 8 13:11:20.140: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter 
logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 8 13:11:21.146: INFO: Selector matched 1 pods for map[app:redis] May 8 13:11:21.146: INFO: Found 0 / 1 May 8 13:11:22.149: INFO: Selector matched 1 pods for map[app:redis] May 8 13:11:22.150: INFO: Found 0 / 1 May 8 13:11:23.146: INFO: Selector matched 1 pods for map[app:redis] May 8 13:11:23.146: INFO: Found 0 / 1 May 8 13:11:24.146: INFO: Selector matched 1 pods for map[app:redis] May 8 13:11:24.146: INFO: Found 1 / 1 May 8 13:11:24.146: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 13:11:24.150: INFO: Selector matched 1 pods for map[app:redis] May 8 13:11:24.150: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 8 13:11:24.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589' May 8 13:11:24.267: INFO: stderr: "" May 8 13:11:24.267: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 13:11:22.933 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 13:11:22.933 # Server started, Redis version 3.2.12\n1:M 08 May 13:11:22.933 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 08 May 13:11:22.933 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 8 13:11:24.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589 --tail=1' May 8 13:11:24.380: INFO: stderr: "" May 8 13:11:24.380: INFO: stdout: "1:M 08 May 13:11:22.933 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 8 13:11:24.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589 --limit-bytes=1' May 8 13:11:24.485: INFO: stderr: "" May 8 13:11:24.485: INFO: stdout: " " STEP: exposing timestamps May 8 13:11:24.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589 --tail=1 --timestamps' May 8 13:11:24.587: INFO: stderr: "" May 8 13:11:24.587: INFO: stdout: "2020-05-08T13:11:22.934052899Z 1:M 08 May 13:11:22.933 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 8 13:11:27.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589 --since=1s' May 8 13:11:27.187: INFO: stderr: "" May 8 13:11:27.187: INFO: stdout: "" May 8 13:11:27.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8sw49 redis-master --namespace=kubectl-1589 --since=24h' May 8 13:11:27.297: INFO: stderr: "" May 8 13:11:27.297: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 13:11:22.933 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 13:11:22.933 # Server started, Redis version 3.2.12\n1:M 08 May 13:11:22.933 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 May 13:11:22.933 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 8 13:11:27.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1589' May 8 13:11:27.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 8 13:11:27.413: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 8 13:11:27.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1589' May 8 13:11:27.524: INFO: stderr: "No resources found.\n" May 8 13:11:27.524: INFO: stdout: "" May 8 13:11:27.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1589 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 13:11:27.623: INFO: stderr: "" May 8 13:11:27.623: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:11:27.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1589" for this suite. May 8 13:11:33.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:11:33.727: INFO: namespace kubectl-1589 deletion completed in 6.10015207s • [SLOW TEST:14.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:11:33.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 8 13:11:33.824: INFO: Waiting up to 5m0s for pod "pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5" in namespace "emptydir-3144" to be "success or failure" May 8 13:11:33.843: INFO: Pod "pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.305212ms May 8 13:11:35.847: INFO: Pod "pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0230985s May 8 13:11:37.851: INFO: Pod "pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026653517s STEP: Saw pod success May 8 13:11:37.851: INFO: Pod "pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5" satisfied condition "success or failure" May 8 13:11:37.853: INFO: Trying to get logs from node iruya-worker pod pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5 container test-container: STEP: delete the pod May 8 13:11:37.872: INFO: Waiting for pod pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5 to disappear May 8 13:11:37.876: INFO: Pod pod-929f567d-0fc2-448c-b94d-63b64c7fd6e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:11:37.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3144" for this suite. May 8 13:11:43.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:11:43.986: INFO: namespace emptydir-3144 deletion completed in 6.106939977s • [SLOW TEST:10.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:11:43.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 8 13:11:44.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8633' May 8 13:11:44.295: INFO: stderr: "" May 8 13:11:44.295: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 8 13:11:44.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:11:44.454: INFO: stderr: "" May 8 13:11:44.454: INFO: stdout: "update-demo-nautilus-gqbfp update-demo-nautilus-rldh9 " May 8 13:11:44.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gqbfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:11:44.576: INFO: stderr: "" May 8 13:11:44.576: INFO: stdout: "" May 8 13:11:44.576: INFO: update-demo-nautilus-gqbfp is created but not running May 8 13:11:49.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:11:49.673: INFO: stderr: "" May 8 13:11:49.673: INFO: stdout: "update-demo-nautilus-gqbfp update-demo-nautilus-rldh9 " May 8 13:11:49.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gqbfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:11:49.774: INFO: stderr: "" May 8 13:11:49.774: INFO: stdout: "true" May 8 13:11:49.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gqbfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:11:49.878: INFO: stderr: "" May 8 13:11:49.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 13:11:49.878: INFO: validating pod update-demo-nautilus-gqbfp May 8 13:11:49.882: INFO: got data: { "image": "nautilus.jpg" } May 8 13:11:49.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 13:11:49.882: INFO: update-demo-nautilus-gqbfp is verified up and running May 8 13:11:49.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:11:49.970: INFO: stderr: "" May 8 13:11:49.970: INFO: stdout: "true" May 8 13:11:49.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:11:50.058: INFO: stderr: "" May 8 13:11:50.058: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 13:11:50.058: INFO: validating pod update-demo-nautilus-rldh9 May 8 13:11:50.061: INFO: got data: { "image": "nautilus.jpg" } May 8 13:11:50.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 13:11:50.062: INFO: update-demo-nautilus-rldh9 is verified up and running STEP: scaling down the replication controller May 8 13:11:50.064: INFO: scanned /root for discovery docs: May 8 13:11:50.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8633' May 8 13:11:51.244: INFO: stderr: "" May 8 13:11:51.244: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 8 13:11:51.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:11:51.348: INFO: stderr: "" May 8 13:11:51.348: INFO: stdout: "update-demo-nautilus-gqbfp update-demo-nautilus-rldh9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 8 13:11:56.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:11:56.446: INFO: stderr: "" May 8 13:11:56.446: INFO: stdout: "update-demo-nautilus-gqbfp update-demo-nautilus-rldh9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 8 13:12:01.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:12:01.546: INFO: stderr: "" May 8 13:12:01.546: INFO: stdout: "update-demo-nautilus-gqbfp update-demo-nautilus-rldh9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 8 13:12:06.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:12:06.643: INFO: stderr: "" May 8 13:12:06.643: INFO: stdout: "update-demo-nautilus-rldh9 " May 8 13:12:06.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:06.731: INFO: stderr: "" May 8 13:12:06.731: INFO: stdout: "true" May 8 13:12:06.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:06.820: INFO: stderr: "" May 8 13:12:06.820: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 13:12:06.820: INFO: validating pod update-demo-nautilus-rldh9 May 8 13:12:06.823: INFO: got data: { "image": "nautilus.jpg" } May 8 13:12:06.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 13:12:06.823: INFO: update-demo-nautilus-rldh9 is verified up and running STEP: scaling up the replication controller May 8 13:12:06.825: INFO: scanned /root for discovery docs: May 8 13:12:06.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8633' May 8 13:12:07.964: INFO: stderr: "" May 8 13:12:07.964: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 8 13:12:07.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:12:08.063: INFO: stderr: "" May 8 13:12:08.063: INFO: stdout: "update-demo-nautilus-pglrs update-demo-nautilus-rldh9 " May 8 13:12:08.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pglrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:08.157: INFO: stderr: "" May 8 13:12:08.157: INFO: stdout: "" May 8 13:12:08.157: INFO: update-demo-nautilus-pglrs is created but not running May 8 13:12:13.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8633' May 8 13:12:13.266: INFO: stderr: "" May 8 13:12:13.267: INFO: stdout: "update-demo-nautilus-pglrs update-demo-nautilus-rldh9 " May 8 13:12:13.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pglrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:13.356: INFO: stderr: "" May 8 13:12:13.356: INFO: stdout: "true" May 8 13:12:13.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pglrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:13.443: INFO: stderr: "" May 8 13:12:13.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 13:12:13.443: INFO: validating pod update-demo-nautilus-pglrs May 8 13:12:13.448: INFO: got data: { "image": "nautilus.jpg" } May 8 13:12:13.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 13:12:13.448: INFO: update-demo-nautilus-pglrs is verified up and running May 8 13:12:13.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:13.530: INFO: stderr: "" May 8 13:12:13.530: INFO: stdout: "true" May 8 13:12:13.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rldh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8633' May 8 13:12:13.617: INFO: stderr: "" May 8 13:12:13.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 13:12:13.617: INFO: validating pod update-demo-nautilus-rldh9 May 8 13:12:13.620: INFO: got data: { "image": "nautilus.jpg" } May 8 13:12:13.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 8 13:12:13.620: INFO: update-demo-nautilus-rldh9 is verified up and running STEP: using delete to clean up resources May 8 13:12:13.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8633' May 8 13:12:13.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 13:12:13.720: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 13:12:13.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8633' May 8 13:12:13.823: INFO: stderr: "No resources found.\n" May 8 13:12:13.823: INFO: stdout: "" May 8 13:12:13.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8633 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 13:12:13.914: INFO: stderr: "" May 8 13:12:13.914: INFO: stdout: "update-demo-nautilus-pglrs\nupdate-demo-nautilus-rldh9\n" May 8 13:12:14.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8633' May 8 13:12:14.528: INFO: stderr: "No resources found.\n" May 8 13:12:14.528: INFO: stdout: "" May 8 13:12:14.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8633 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 13:12:14.736: INFO: stderr: "" May 8 13:12:14.736: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:12:14.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8633" for this suite. 
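The scale exercise above is reproducible by hand with the same commands the suite shells out to; a minimal sketch, assuming kubectl already points at the cluster (the namespace and controller names below are the suite's generated ones, so substitute your own):

    # scale the replication controller down, then back up
    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8633
    kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8633
    # poll the surviving pods the same way the test does, with a Go template
    kubectl get pods -l name=update-demo --namespace=kubectl-8633 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'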
May 8 13:12:36.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:12:36.998: INFO: namespace kubectl-8633 deletion completed in 22.256804812s • [SLOW TEST:53.010 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:12:36.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 8 13:12:37.068: INFO: Waiting up to 5m0s for pod "client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac" in namespace "containers-4796" to be "success or failure" May 8 13:12:37.072: INFO: Pod "client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.964842ms May 8 13:12:39.076: INFO: Pod "client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390616s May 8 13:12:41.091: INFO: Pod "client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0229984s STEP: Saw pod success May 8 13:12:41.091: INFO: Pod "client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac" satisfied condition "success or failure" May 8 13:12:41.101: INFO: Trying to get logs from node iruya-worker pod client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac container test-container: STEP: delete the pod May 8 13:12:41.140: INFO: Waiting for pod client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac to disappear May 8 13:12:41.156: INFO: Pod client-containers-5aed241a-12cb-4565-8d4a-8aedb0d537ac no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:12:41.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4796" for this suite. 
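The entrypoint override exercised above comes down to setting spec.containers[].command, which replaces the image's ENTRYPOINT (args would replace CMD). A minimal sketch with an illustrative image and names, not the exact pod the suite generates:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: entrypoint-override-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        # command replaces the image's ENTRYPOINT entirely
        command: ["/bin/sh", "-c", "echo entrypoint overridden"]
    EOF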
May 8 13:12:47.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:12:47.248: INFO: namespace containers-4796 deletion completed in 6.088372847s • [SLOW TEST:10.250 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:12:47.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:12:47.350: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.655971ms)
May 8 13:12:47.353: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.847677ms)
May 8 13:12:47.357: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.895905ms)
May 8 13:12:47.361: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.325426ms)
May 8 13:12:47.364: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.9563ms)
May 8 13:12:47.367: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.558493ms)
May 8 13:12:47.370: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.888323ms)
May 8 13:12:47.373: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.610672ms)
May 8 13:12:47.375: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.436805ms)
May 8 13:12:47.378: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.794538ms)
May 8 13:12:47.381: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.680953ms)
May 8 13:12:47.384: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.560005ms)
May 8 13:12:47.416: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 32.083321ms)
May 8 13:12:47.426: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 9.897741ms)
May 8 13:12:47.428: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.051029ms)
May 8 13:12:47.430: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.117965ms)
May 8 13:12:47.432: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.953988ms)
May 8 13:12:47.434: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.993866ms)
May 8 13:12:47.436: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.932348ms)
May 8 13:12:47.438: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 1.905867ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:12:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-777" for this suite. May 8 13:12:53.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:12:53.555: INFO: namespace proxy-777 deletion completed in 6.115100627s • [SLOW TEST:6.307 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:12:53.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0508 13:13:24.200638 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 13:13:24.200: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:13:24.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9406" for this suite. 
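The orphaning behaviour verified above is driven by deleteOptions.propagationPolicy, and the same request can be made from the CLI; a minimal sketch with an illustrative deployment name (on kubectl clients contemporary with this suite the flag is the boolean --cascade=false, on newer clients --cascade=orphan):

    # delete the deployment but leave its ReplicaSet and pods behind
    kubectl delete deployment example-deployment --cascade=orphan
    # the ReplicaSet survives, its ownerReferences now cleared
    kubectl get rs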
May 8 13:13:30.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:13:30.315: INFO: namespace gc-9406 deletion completed in 6.111991429s • [SLOW TEST:36.760 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:13:30.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-cae60bd3-a5ad-460f-83b4-902c83be20a5 STEP: Creating a pod to test consume configMaps May 8 13:13:30.475: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70" in namespace "projected-8719" to be "success or failure" May 8 13:13:30.479: INFO: Pod "pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095846ms May 8 13:13:32.484: INFO: Pod "pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008343775s May 8 13:13:34.488: INFO: Pod "pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012801119s STEP: Saw pod success May 8 13:13:34.488: INFO: Pod "pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70" satisfied condition "success or failure" May 8 13:13:34.492: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70 container projected-configmap-volume-test: STEP: delete the pod May 8 13:13:34.568: INFO: Waiting for pod pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70 to disappear May 8 13:13:34.582: INFO: Pod pod-projected-configmaps-29fa7323-afc2-4a90-9e91-92e4e6171f70 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:13:34.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8719" for this suite. 
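The "mappings" being consumed above are the items key-to-path renames inside a projected configMap source: each listed key is exposed under a caller-chosen file path instead of its own name. A minimal sketch of an equivalent pod, with illustrative names and data rather than the suite's generated ones:

    kubectl create configmap projected-example --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example
    spec:
      restartPolicy: Never
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-example
              items:
              - key: data-1           # configMap key...
                path: path/to/data-2  # ...remapped to a different file path
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/projected/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected
    EOF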
May 8 13:13:40.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:13:40.694: INFO: namespace projected-8719 deletion completed in 6.109094793s • [SLOW TEST:10.378 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:13:40.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0508 13:13:50.833695 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 13:13:50.833: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:13:50.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8442" for this suite. 
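For contrast with the orphaning case earlier, this test relies on the default (background) cascading delete: the pods carry ownerReferences back to the controller, so once it is gone the garbage collector removes them. A minimal sketch with illustrative names:

    # delete the controller without orphaning; background cascade is the default
    kubectl delete rc example-rc
    # the garbage collector removes the owned pods shortly afterwards
    kubectl get pods -l name=example-rc --watch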
May 8 13:13:56.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:13:56.951: INFO: namespace gc-8442 deletion completed in 6.113701722s • [SLOW TEST:16.256 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:13:56.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 8 13:13:57.035: INFO: Pod name pod-release: Found 0 pods out of 1 May 8 13:14:02.040: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:14:03.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-143" for this suite. 
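The release mechanic above hinges on the controller's label selector: once a pod's labels stop matching, the ReplicationController drops its ownerReference (releasing the pod) and creates a replacement to restore its replica count. A minimal sketch, with an illustrative pod name:

    # take one pod out from under the controller by changing its match label
    kubectl label pod pod-release-abc12 name=released --overwrite
    # the relabelled pod keeps running unowned; the rc spins up a replacement
    kubectl get pods -l name=pod-release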
May 8 13:14:09.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:14:09.332: INFO: namespace replication-controller-143 deletion completed in 6.241610644s • [SLOW TEST:12.381 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:14:09.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:14:09.538: INFO: Create a RollingUpdate DaemonSet May 8 13:14:09.541: INFO: Check that daemon pods launch on every node of the cluster May 8 13:14:09.563: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:09.602: INFO: Number of nodes with available pods: 0 May 8 13:14:09.602: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:10.607: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:10.612: INFO: Number of nodes with available pods: 0 May 8 13:14:10.612: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:11.607: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:11.611: INFO: Number of nodes with available pods: 0 May 8 13:14:11.611: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:12.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:12.643: INFO: Number of nodes with available pods: 0 May 8 13:14:12.643: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:13.607: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:13.611: INFO: Number of nodes with available pods: 1 May 8 13:14:13.611: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:14.608: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:14.611: INFO: Number of nodes with available pods: 2 May 8 
13:14:14.611: INFO: Number of running nodes: 2, number of available pods: 2 May 8 13:14:14.611: INFO: Update the DaemonSet to trigger a rollout May 8 13:14:14.617: INFO: Updating DaemonSet daemon-set May 8 13:14:18.677: INFO: Roll back the DaemonSet before rollout is complete May 8 13:14:18.684: INFO: Updating DaemonSet daemon-set May 8 13:14:18.684: INFO: Make sure DaemonSet rollback is complete May 8 13:14:18.707: INFO: Wrong image for pod: daemon-set-cg7qw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 8 13:14:18.707: INFO: Pod daemon-set-cg7qw is not available May 8 13:14:18.726: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:19.730: INFO: Wrong image for pod: daemon-set-cg7qw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 8 13:14:19.730: INFO: Pod daemon-set-cg7qw is not available May 8 13:14:19.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:20.731: INFO: Wrong image for pod: daemon-set-cg7qw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 8 13:14:20.731: INFO: Pod daemon-set-cg7qw is not available May 8 13:14:20.736: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:21.730: INFO: Wrong image for pod: daemon-set-cg7qw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 8 13:14:21.730: INFO: Pod daemon-set-cg7qw is not available May 8 13:14:21.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:22.731: INFO: Pod daemon-set-k2t6v is not available May 8 13:14:22.735: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1659, will wait for the garbage collector to delete the pods May 8 13:14:22.801: INFO: Deleting DaemonSet.extensions daemon-set took: 6.41742ms May 8 13:14:23.101: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.291213ms May 8 13:14:31.904: INFO: Number of nodes with available pods: 0 May 8 13:14:31.904: INFO: Number of running nodes: 0, number of available pods: 0 May 8 13:14:31.907: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1659/daemonsets","resourceVersion":"9709904"},"items":null} May 8 13:14:31.910: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1659/pods","resourceVersion":"9709904"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:14:31.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1659" for this suite. 
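[Editor's note] The rollback test above pushes a knowingly bad image (foo:non-existent), then reverts before the rollout can finish, expecting the untouched pods not to be restarted unnecessarily. The same sequence maps onto kubectl rollout; the container name app below is an assumption, not the test's actual value:

    kubectl set image daemonset/daemon-set app=foo:non-existent
    kubectl rollout status daemonset/daemon-set   # stalls: the new pod can never pull its image
    kubectl rollout undo daemonset/daemon-set     # roll back before the rollout completes
    kubectl rollout history daemonset/daemon-set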
May 8 13:14:37.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:14:38.031: INFO: namespace daemonsets-1659 deletion completed in 6.089483922s • [SLOW TEST:28.698 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:14:38.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 8 13:14:38.120: INFO: Waiting up to 5m0s for pod "downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b" in namespace "downward-api-1034" to be "success or failure" May 8 13:14:38.122: INFO: Pod "downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434092ms May 8 13:14:40.127: INFO: Pod "downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006605906s May 8 13:14:42.131: INFO: Pod "downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010916612s STEP: Saw pod success May 8 13:14:42.131: INFO: Pod "downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b" satisfied condition "success or failure" May 8 13:14:42.135: INFO: Trying to get logs from node iruya-worker pod downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b container dapi-container: STEP: delete the pod May 8 13:14:42.154: INFO: Waiting for pod downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b to disappear May 8 13:14:42.195: INFO: Pod downward-api-f85e347f-5a8c-45a0-b208-9f31d3484a0b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:14:42.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1034" for this suite. 
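[Editor's note] The Downward API test above exposes limits.cpu and limits.memory as environment variables on a container that declares no resource limits, in which case the kubelet substitutes the node's allocatable values. A minimal sketch of such a pod (all names illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
        env:
        # No resources.limits are set on this container, so both values
        # default to the node's allocatable CPU and memory.
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory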
May 8 13:14:48.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:14:48.308: INFO: namespace downward-api-1034 deletion completed in 6.109143744s • [SLOW TEST:10.277 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:14:48.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:14:48.456: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 8 13:14:48.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:48.468: INFO: Number of nodes with available pods: 0 May 8 13:14:48.468: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:49.532: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:49.535: INFO: Number of nodes with available pods: 0 May 8 13:14:49.535: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:50.506: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:50.643: INFO: Number of nodes with available pods: 0 May 8 13:14:50.643: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:51.471: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:51.474: INFO: Number of nodes with available pods: 0 May 8 13:14:51.474: INFO: Node iruya-worker is running more than one daemon pod May 8 13:14:52.474: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:52.477: INFO: Number of nodes with available pods: 1 May 8 13:14:52.477: INFO: Node iruya-worker2 is running more than one daemon pod May 8 13:14:53.482: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 8 13:14:53.485: INFO: Number of nodes with available pods: 2 May 8 13:14:53.485: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 8 13:14:53.510: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:53.510: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:53.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:54.535: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:54.535: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:54.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:55.604: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:55.604: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:55.608: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:56.541: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:56.541: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:56.544: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:57.535: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:57.535: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:57.535: INFO: Pod daemon-set-vn2dt is not available May 8 13:14:57.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:58.536: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:58.536: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:58.536: INFO: Pod daemon-set-vn2dt is not available May 8 13:14:58.554: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:14:59.536: INFO: Wrong image for pod: daemon-set-r42z9. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:59.536: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:14:59.536: INFO: Pod daemon-set-vn2dt is not available May 8 13:14:59.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:00.537: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:00.537: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:00.537: INFO: Pod daemon-set-vn2dt is not available May 8 13:15:00.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:01.543: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:01.543: INFO: Wrong image for pod: daemon-set-vn2dt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:01.543: INFO: Pod daemon-set-vn2dt is not available May 8 13:15:01.547: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:02.536: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:02.537: INFO: Pod daemon-set-w5qbz is not available May 8 13:15:02.543: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:03.664: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:03.664: INFO: Pod daemon-set-w5qbz is not available May 8 13:15:03.669: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:04.535: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:04.535: INFO: Pod daemon-set-w5qbz is not available May 8 13:15:04.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:05.578: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:05.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:06.536: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 8 13:15:06.536: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:06.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:07.546: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:07.546: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:07.550: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:08.535: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:08.536: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:08.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:09.568: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:09.568: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:09.571: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:10.536: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:10.536: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:10.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:11.536: INFO: Wrong image for pod: daemon-set-r42z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 8 13:15:11.536: INFO: Pod daemon-set-r42z9 is not available May 8 13:15:11.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:12.536: INFO: Pod daemon-set-crr5l is not available May 8 13:15:12.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 8 13:15:12.544: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:12.548: INFO: Number of nodes with available pods: 1 May 8 13:15:12.548: INFO: Node iruya-worker is running more than one daemon pod May 8 13:15:13.577: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:13.609: INFO: Number of nodes with available pods: 1 May 8 13:15:13.609: INFO: Node iruya-worker is running more than one daemon pod May 8 13:15:14.551: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:14.554: INFO: Number of nodes with available pods: 1 May 8 13:15:14.554: INFO: Node iruya-worker is running more than one daemon pod May 8 13:15:15.553: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 13:15:15.557: INFO: Number of nodes with available pods: 2 May 8 13:15:15.557: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7525, will wait for the garbage collector to delete the pods May 8 13:15:15.630: INFO: Deleting DaemonSet.extensions daemon-set took: 5.788149ms May 8 13:15:15.930: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27646ms May 8 13:15:22.234: INFO: Number of nodes with available pods: 0 May 8 13:15:22.234: INFO: Number of running nodes: 0, number of available pods: 0 May 8 13:15:22.237: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7525/daemonsets","resourceVersion":"9710129"},"items":null} May 8 13:15:22.239: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7525/pods","resourceVersion":"9710129"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:15:22.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7525" for this suite. 
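[Editor's note] The RollingUpdate test above drives exactly this flow: update the pod template image, then watch old pods terminate and new ones become available node by node. A sketch of the two relevant pieces, with the container name app again an assumption:

    # In the DaemonSet spec, the strategy that makes template updates roll out automatically:
    spec:
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1

    # Triggering and watching the rollout:
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
    kubectl rollout status daemonset/daemon-set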
May 8 13:15:28.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:15:28.345: INFO: namespace daemonsets-7525 deletion completed in 6.094312139s • [SLOW TEST:40.036 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:15:28.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-mjg5 STEP: Creating a pod to test atomic-volume-subpath May 8 13:15:28.520: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mjg5" in namespace "subpath-7276" to be "success or failure" May 8 13:15:28.540: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.528118ms May 8 13:15:30.543: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023024395s May 8 13:15:32.547: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 4.02744943s May 8 13:15:34.552: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 6.031857762s May 8 13:15:36.556: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 8.035679679s May 8 13:15:38.560: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 10.039893184s May 8 13:15:40.564: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 12.044451024s May 8 13:15:42.569: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 14.048770289s May 8 13:15:44.573: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 16.0534528s May 8 13:15:46.578: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 18.058171376s May 8 13:15:48.583: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 20.062591545s May 8 13:15:50.588: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Running", Reason="", readiness=true. Elapsed: 22.068208572s May 8 13:15:52.592: INFO: Pod "pod-subpath-test-configmap-mjg5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.071946958s STEP: Saw pod success May 8 13:15:52.592: INFO: Pod "pod-subpath-test-configmap-mjg5" satisfied condition "success or failure" May 8 13:15:52.594: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-mjg5 container test-container-subpath-configmap-mjg5: STEP: delete the pod May 8 13:15:52.657: INFO: Waiting for pod pod-subpath-test-configmap-mjg5 to disappear May 8 13:15:52.669: INFO: Pod pod-subpath-test-configmap-mjg5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-mjg5 May 8 13:15:52.669: INFO: Deleting pod "pod-subpath-test-configmap-mjg5" in namespace "subpath-7276" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:15:52.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7276" for this suite. May 8 13:15:58.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:15:58.768: INFO: namespace subpath-7276 deletion completed in 6.09420387s • [SLOW TEST:30.422 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:15:58.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 8 13:16:03.464: INFO: Successfully updated pod "labelsupdate202b09f8-8482-490b-93c8-455067f6513e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:16:05.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9245" for this suite. 
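[Editor's note] The labels-update test above mounts pod labels through a downwardAPI volume and then edits the labels, relying on the kubelet to rewrite the projected file in place, which is why the pod is updated rather than restarted. A minimal sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-demo
      labels:
        foo: bar
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

    # After the pod is running, change a label and watch the file refresh:
    kubectl label pod labelsupdate-demo foo=baz --overwrite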
May 8 13:16:27.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:16:27.599: INFO: namespace downward-api-9245 deletion completed in 22.085746182s • [SLOW TEST:28.831 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:16:27.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:16:27.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 8 13:16:27.826: INFO: stderr: "" May 8 13:16:27.826: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:16:27.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6140" for this suite. 
May 8 13:16:33.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:16:33.964: INFO: namespace kubectl-6140 deletion completed in 6.133294169s • [SLOW TEST:6.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:16:33.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:16:34.035: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.661003ms) May 8 13:16:34.040: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 5.273895ms) May 8 13:16:34.042: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.359824ms) May 8 13:16:34.045: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.535771ms) May 8 13:16:34.066: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 20.859686ms) May 8 13:16:34.069: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.317009ms) May 8 13:16:34.072: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.160744ms) May 8 13:16:34.076: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.64038ms) May 8 13:16:34.080: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.428821ms) May 8 13:16:34.083: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.624701ms) May 8 13:16:34.086: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.187615ms) May 8 13:16:34.090: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.232008ms) May 8 13:16:34.092: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.622312ms) May 8 13:16:34.096: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.385838ms) May 8 13:16:34.099: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.129276ms) May 8 13:16:34.102: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.24283ms) May 8 13:16:34.105: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.261074ms) May 8 13:16:34.109: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.079308ms) May 8 13:16:34.112: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.269127ms) May 8 13:16:34.115: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.341353ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:16:34.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6364" for this suite. May 8 13:16:40.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:16:40.219: INFO: namespace proxy-6364 deletion completed in 6.100243506s • [SLOW TEST:6.255 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:16:40.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:16:44.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1073" for this suite.
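[Editor's note] The kubelet hostAliases test above verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts by the kubelet. A minimal reproduction (hostnames made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      containers:
      - name: busybox-host-aliases
        image: busybox
        command: ["cat", "/etc/hosts"]

    # kubectl logs hostaliases-demo then shows the kubelet-managed
    # "127.0.0.1 foo.local bar.local" entry appended to /etc/hosts.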
May 8 13:17:34.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:17:34.531: INFO: namespace kubelet-test-1073 deletion completed in 50.109787635s • [SLOW TEST:54.311 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:17:34.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-0de234ff-76b8-469a-ba71-721adb4dedc6 STEP: Creating a pod to test consume secrets May 8 13:17:34.635: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8" in namespace "projected-1782" to be "success or failure" May 8 13:17:34.640: INFO: Pod "pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.849675ms May 8 13:17:36.645: INFO: Pod "pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009284149s May 8 13:17:38.649: INFO: Pod "pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014154172s STEP: Saw pod success May 8 13:17:38.649: INFO: Pod "pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8" satisfied condition "success or failure" May 8 13:17:38.653: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8 container projected-secret-volume-test: STEP: delete the pod May 8 13:17:38.676: INFO: Waiting for pod pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8 to disappear May 8 13:17:38.694: INFO: Pod pod-projected-secrets-628855b8-0a0e-45f0-a9cf-2333f48dcca8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:17:38.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1782" for this suite. 
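[Editor's note] "With mappings" in the projected-secret test above means a secret key is remapped onto a custom file path inside a projected volume, which is what the items/path stanza below does. The names mirror the test's naming style but are illustrative:

    apiVersion: v1
    kind: Secret
    metadata:
      name: projected-secret-demo
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-pod
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        # The key data-1 is readable at the remapped path, not at /etc/projected/data-1:
        command: ["cat", "/etc/projected/new-path-data-1"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: projected-secret
        projected:
          sources:
          - secret:
              name: projected-secret-demo
              items:
              - key: data-1
                path: new-path-data-1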
May 8 13:17:44.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:17:44.799: INFO: namespace projected-1782 deletion completed in 6.101436563s • [SLOW TEST:10.267 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:17:44.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 8 13:17:44.849: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix544505449/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:17:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4320" for this suite. 
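[Editor's note] The --unix-socket test above starts the proxy on a filesystem socket instead of a TCP port and fetches /api/ through it. The same check works with any unix-socket-capable HTTP client, for example curl (7.40 or newer); the socket path here is illustrative:

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    # The host part of the URL is ignored; the connection travels over the socket.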
May 8 13:17:50.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:17:51.007: INFO: namespace kubectl-4320 deletion completed in 6.09235786s • [SLOW TEST:6.208 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:17:51.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-rr4t STEP: Creating a pod to test atomic-volume-subpath May 8 13:17:51.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rr4t" in namespace "subpath-3317" to be "success or failure" May 8 13:17:51.120: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.84658ms May 8 13:17:53.124: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015289902s May 8 13:17:55.151: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 4.041492633s May 8 13:17:57.155: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 6.046348232s May 8 13:17:59.160: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 8.050985454s May 8 13:18:01.165: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 10.056017894s May 8 13:18:03.170: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.060698296s May 8 13:18:05.173: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 14.06426135s May 8 13:18:07.178: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 16.069314122s May 8 13:18:09.182: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 18.073247569s May 8 13:18:11.186: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 20.077207522s May 8 13:18:13.191: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Running", Reason="", readiness=true. Elapsed: 22.081761949s May 8 13:18:15.195: INFO: Pod "pod-subpath-test-secret-rr4t": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.086322153s STEP: Saw pod success May 8 13:18:15.195: INFO: Pod "pod-subpath-test-secret-rr4t" satisfied condition "success or failure" May 8 13:18:15.199: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-rr4t container test-container-subpath-secret-rr4t: STEP: delete the pod May 8 13:18:15.411: INFO: Waiting for pod pod-subpath-test-secret-rr4t to disappear May 8 13:18:15.498: INFO: Pod pod-subpath-test-secret-rr4t no longer exists STEP: Deleting pod pod-subpath-test-secret-rr4t May 8 13:18:15.498: INFO: Deleting pod "pod-subpath-test-secret-rr4t" in namespace "subpath-3317" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:18:15.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3317" for this suite. May 8 13:18:21.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:18:21.606: INFO: namespace subpath-3317 deletion completed in 6.101959821s • [SLOW TEST:30.599 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:18:21.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 8 13:18:21.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6895' May 8 13:18:24.379: INFO: stderr: "" May 8 13:18:24.379: INFO: stdout: "pod/pause created\n" May 8 13:18:24.379: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 8 13:18:24.379: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6895" to be "running and ready" May 8 13:18:24.410: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.373233ms May 8 13:18:26.415: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035586854s May 8 13:18:28.535: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.155760932s May 8 13:18:28.535: INFO: Pod "pause" satisfied condition "running and ready" May 8 13:18:28.535: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 8 13:18:28.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6895' May 8 13:18:28.630: INFO: stderr: "" May 8 13:18:28.630: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 8 13:18:28.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6895' May 8 13:18:28.830: INFO: stderr: "" May 8 13:18:28.830: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 8 13:18:28.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6895' May 8 13:18:28.918: INFO: stderr: "" May 8 13:18:28.918: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 8 13:18:28.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6895' May 8 13:18:29.003: INFO: stderr: "" May 8 13:18:29.003: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 8 13:18:29.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6895' May 8 13:18:29.144: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 13:18:29.144: INFO: stdout: "pod \"pause\" force deleted\n" May 8 13:18:29.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6895' May 8 13:18:29.254: INFO: stderr: "No resources found.\n" May 8 13:18:29.254: INFO: stdout: "" May 8 13:18:29.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6895 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 13:18:29.350: INFO: stderr: "" May 8 13:18:29.351: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:18:29.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6895" for this suite. 
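[Editor's note] The three kubectl invocations in the label test above generalize to the standard add / inspect / remove pattern (the --kubeconfig and --namespace flags from the log are omitted here):

    kubectl label pods pause testing-label=testing-label-value   # add the label
    kubectl get pod pause -L testing-label                       # show it as a column
    kubectl label pods pause testing-label-                      # trailing '-' removes it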
May 8 13:18:35.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:18:35.582: INFO: namespace kubectl-6895 deletion completed in 6.228402369s • [SLOW TEST:13.976 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:18:35.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:18:35.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:18:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3278" for this suite. 
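[Editor's note] The CustomResourceDefinition test above simply registers and removes a definition. Against this v1.15 API server the pre-GA apiextensions.k8s.io/v1beta1 API applies (the v1 CRD API arrived in Kubernetes 1.16); the group and kind below are the stock documentation example, not the test's generated names:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      # Must be <plural>.<group>
      name: crontabs.stable.example.com
    spec:
      group: stable.example.com
      versions:
      - name: v1
        served: true
        storage: true
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
        shortNames:
        - ct

    # Deleting the definition also removes all CronTab objects:
    kubectl delete crd crontabs.stable.example.com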
May 8 13:18:42.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:18:42.859: INFO: namespace custom-resource-definition-3278 deletion completed in 6.084324265s • [SLOW TEST:7.277 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:18:42.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 8 13:18:42.925: INFO: Waiting up to 5m0s for pod "var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd" in namespace "var-expansion-8337" to be "success or failure" May 8 13:18:42.930: INFO: Pod "var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069887ms May 8 13:18:44.933: INFO: Pod "var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007514006s May 8 13:18:46.937: INFO: Pod "var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0114409s STEP: Saw pod success May 8 13:18:46.937: INFO: Pod "var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd" satisfied condition "success or failure" May 8 13:18:46.939: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd container dapi-container: STEP: delete the pod May 8 13:18:46.967: INFO: Waiting for pod var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd to disappear May 8 13:18:46.971: INFO: Pod var-expansion-294f1e3c-6584-4f35-8256-fb7cce9556dd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:18:46.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8337" for this suite. 
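The env-composition pod spec is not echoed to the log; a minimal sketch of what such a pod looks like (image and variable names are illustrative; only the container name dapi-container comes from this run):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $COMPOSED_VAR"]
      env:
      - name: BASE_VAR
        value: "base-value"
      - name: COMPOSED_VAR
        value: "prefix-$(BASE_VAR)-suffix"   # $(BASE_VAR) is expanded by Kubernetes, not by a shell
  EOF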
May 8 13:18:52.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:18:53.068: INFO: namespace var-expansion-8337 deletion completed in 6.093012641s • [SLOW TEST:10.209 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:18:53.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 8 13:18:57.708: INFO: Successfully updated pod "annotationupdateaae1dde3-2d24-4d19-aa8c-51c4e866408a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:18:59.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9334" for this suite. 
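The interesting behavior in this spec is that a projected downward API volume is re-rendered when pod metadata changes; a minimal sketch (annotation key, image, and file path are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations:
      builder: alice
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations
  EOF
  # the kubelet eventually rewrites /etc/podinfo/annotations after this change
  kubectl annotate pod annotationupdate-demo builder=bob --overwrite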
May 8 13:19:21.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:19:21.850: INFO: namespace projected-9334 deletion completed in 22.116867909s • [SLOW TEST:28.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:19:21.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 8 13:19:22.746: INFO: Pod name wrapped-volume-race-ea934f82-fb65-45f7-a12d-fa37243d743c: Found 0 pods out of 5 May 8 13:19:27.755: INFO: Pod name wrapped-volume-race-ea934f82-fb65-45f7-a12d-fa37243d743c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ea934f82-fb65-45f7-a12d-fa37243d743c in namespace emptydir-wrapper-7990, will wait for the garbage collector to delete the pods May 8 13:19:41.839: INFO: Deleting ReplicationController wrapped-volume-race-ea934f82-fb65-45f7-a12d-fa37243d743c took: 5.710786ms May 8 13:19:42.139: INFO: Terminating ReplicationController wrapped-volume-race-ea934f82-fb65-45f7-a12d-fa37243d743c pods took: 300.244699ms STEP: Creating RC which spawns configmap-volume pods May 8 13:20:22.481: INFO: Pod name wrapped-volume-race-b02617d9-91c6-4e2c-b6ed-d38c376b3b3d: Found 0 pods out of 5 May 8 13:20:27.490: INFO: Pod name wrapped-volume-race-b02617d9-91c6-4e2c-b6ed-d38c376b3b3d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b02617d9-91c6-4e2c-b6ed-d38c376b3b3d in namespace emptydir-wrapper-7990, will wait for the garbage collector to delete the pods May 8 13:20:41.596: INFO: Deleting ReplicationController wrapped-volume-race-b02617d9-91c6-4e2c-b6ed-d38c376b3b3d took: 9.89861ms May 8 13:20:41.897: INFO: Terminating ReplicationController wrapped-volume-race-b02617d9-91c6-4e2c-b6ed-d38c376b3b3d pods took: 300.434966ms STEP: Creating RC which spawns configmap-volume pods May 8 13:21:23.223: INFO: Pod name wrapped-volume-race-9ad93c50-df09-432e-a091-af39263647e3: Found 0 pods out of 5 May 8 13:21:28.233: INFO: Pod name wrapped-volume-race-9ad93c50-df09-432e-a091-af39263647e3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9ad93c50-df09-432e-a091-af39263647e3 in namespace emptydir-wrapper-7990, will wait for the garbage collector to delete the pods May 8 13:21:44.548: INFO: Deleting 
ReplicationController wrapped-volume-race-9ad93c50-df09-432e-a091-af39263647e3 took: 10.153225ms May 8 13:21:44.949: INFO: Terminating ReplicationController wrapped-volume-race-9ad93c50-df09-432e-a091-af39263647e3 pods took: 400.383651ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:22:33.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7990" for this suite. May 8 13:22:41.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:22:41.454: INFO: namespace emptydir-wrapper-7990 deletion completed in 8.196817572s • [SLOW TEST:199.603 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:22:41.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-6592d397-abed-4d72-ad99-a8fc28f30a8c STEP: Creating a pod to test consume secrets May 8 13:22:41.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788" in namespace "projected-6940" to be "success or failure" May 8 13:22:41.562: INFO: Pod "pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634359ms May 8 13:22:43.565: INFO: Pod "pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010283219s May 8 13:22:45.569: INFO: Pod "pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014060654s STEP: Saw pod success May 8 13:22:45.569: INFO: Pod "pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788" satisfied condition "success or failure" May 8 13:22:45.571: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788 container projected-secret-volume-test: STEP: delete the pod May 8 13:22:45.609: INFO: Waiting for pod pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788 to disappear May 8 13:22:45.616: INFO: Pod pod-projected-secrets-6ed33d2d-4bb5-47c6-a2b7-f6a1605d0788 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:22:45.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6940" for this suite. May 8 13:22:51.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:22:51.706: INFO: namespace projected-6940 deletion completed in 6.087223579s • [SLOW TEST:10.252 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:22:51.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 8 13:22:51.821: INFO: namespace kubectl-2164 May 8 13:22:51.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2164' May 8 13:22:52.191: INFO: stderr: "" May 8 13:22:52.191: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 8 13:22:53.196: INFO: Selector matched 1 pods for map[app:redis] May 8 13:22:53.196: INFO: Found 0 / 1 May 8 13:22:54.196: INFO: Selector matched 1 pods for map[app:redis] May 8 13:22:54.197: INFO: Found 0 / 1 May 8 13:22:55.196: INFO: Selector matched 1 pods for map[app:redis] May 8 13:22:55.196: INFO: Found 0 / 1 May 8 13:22:56.196: INFO: Selector matched 1 pods for map[app:redis] May 8 13:22:56.197: INFO: Found 1 / 1 May 8 13:22:56.197: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 8 13:22:56.200: INFO: Selector matched 1 pods for map[app:redis] May 8 13:22:56.200: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 8 13:22:56.200: INFO: wait on redis-master startup in kubectl-2164 May 8 13:22:56.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kldck redis-master --namespace=kubectl-2164' May 8 13:22:56.310: INFO: stderr: "" May 8 13:22:56.310: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 May 13:22:55.143 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 May 13:22:55.143 # Server started, Redis version 3.2.12\n1:M 08 May 13:22:55.143 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 May 13:22:55.143 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 8 13:22:56.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2164' May 8 13:22:56.503: INFO: stderr: "" May 8 13:22:56.503: INFO: stdout: "service/rm2 exposed\n" May 8 13:22:56.513: INFO: Service rm2 in namespace kubectl-2164 found. STEP: exposing service May 8 13:22:58.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2164' May 8 13:22:58.659: INFO: stderr: "" May 8 13:22:58.659: INFO: stdout: "service/rm3 exposed\n" May 8 13:22:58.670: INFO: Service rm3 in namespace kubectl-2164 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:23:00.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2164" for this suite. 
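Both expose operations above are ordinary kubectl invocations; a minimal sketch of the same two steps, with names and ports taken from this run:

  # expose an existing replication controller as a new service
  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2164
  # an existing service can itself be re-exposed under another name and port
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2164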
May 8 13:23:24.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:23:24.814: INFO: namespace kubectl-2164 deletion completed in 24.131473634s • [SLOW TEST:33.107 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:23:24.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:23:25.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e" in namespace "projected-3007" to be "success or failure" May 8 13:23:25.054: INFO: Pod "downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 53.800137ms May 8 13:23:27.107: INFO: Pod "downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106482546s May 8 13:23:29.111: INFO: Pod "downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110581804s STEP: Saw pod success May 8 13:23:29.111: INFO: Pod "downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e" satisfied condition "success or failure" May 8 13:23:29.114: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e container client-container: STEP: delete the pod May 8 13:23:29.139: INFO: Waiting for pod downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e to disappear May 8 13:23:29.179: INFO: Pod downwardapi-volume-317ad6b4-c31e-468b-a0c2-426850253f5e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:23:29.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3007" for this suite. 
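Here the projected downward API volume publishes a resource request rather than metadata; a minimal sketch of such a pod (the names and the 250m request are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container   # required when publishing container resources
                resource: requests.cpu
  EOF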
May 8 13:23:35.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:23:35.294: INFO: namespace projected-3007 deletion completed in 6.111766808s • [SLOW TEST:10.480 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:23:35.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0508 13:23:36.065357 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 13:23:36.065: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:23:36.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9105" for this suite. 
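The behavior under test, dependents following their owner when it is deleted without orphaning, can be observed with plain kubectl as well; a minimal sketch (the deployment name is illustrative; on this kubectl generation, delete cascades to dependents by default):

  kubectl create deployment nginx --image=nginx
  kubectl get rs -l app=nginx          # the deployment-owned ReplicaSet
  kubectl delete deployment nginx      # default propagation deletes dependents too
  kubectl get rs -l app=nginx          # empty once the garbage collector catches up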
May 8 13:23:42.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:23:42.242: INFO: namespace gc-9105 deletion completed in 6.163947874s • [SLOW TEST:6.947 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:23:42.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 13:23:50.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:23:50.450: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:23:52.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:23:52.454: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:23:54.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:23:54.455: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:23:56.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:23:56.455: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:23:58.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:23:58.455: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:24:00.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:24:00.455: INFO: Pod pod-with-poststart-http-hook still exists May 8 13:24:02.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 8 13:24:02.455: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:24:02.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9162" for this suite. 
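The postStart hook in this spec targets the separate HTTP handler pod created in BeforeEach; a minimal sketch of a pod carrying an HTTP postStart hook (host, port, and path are illustrative; the pod name comes from this run):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: pod-with-poststart-http-hook
      image: nginx
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.0.10        # illustrative target; the hook fires as soon as the container starts
            path: /echo?msg=poststart
            port: 8080
  EOF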
May 8 13:24:24.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:24:24.602: INFO: namespace container-lifecycle-hook-9162 deletion completed in 22.142605471s • [SLOW TEST:42.360 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:24:24.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 13:24:24.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5817' May 8 13:24:24.759: INFO: stderr: "" May 8 13:24:24.759: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 8 13:24:24.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5817' May 8 13:24:32.170: INFO: stderr: "" May 8 13:24:32.170: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:24:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5817" for this suite. 
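With --restart=Never, kubectl run creates a bare pod rather than a managing controller; the exact invocation from this run, followed by the cleanup its AfterEach performs:

  kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5817
  kubectl delete pods e2e-test-nginx-pod --namespace=kubectl-5817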
May 8 13:24:38.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:24:38.296: INFO: namespace kubectl-5817 deletion completed in 6.114021564s • [SLOW TEST:13.694 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:24:38.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 8 13:24:38.423: INFO: Waiting up to 5m0s for pod "var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0" in namespace "var-expansion-5713" to be "success or failure" May 8 13:24:38.427: INFO: Pod "var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.663432ms May 8 13:24:40.445: INFO: Pod "var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022195205s May 8 13:24:42.449: INFO: Pod "var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026406693s STEP: Saw pod success May 8 13:24:42.450: INFO: Pod "var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0" satisfied condition "success or failure" May 8 13:24:42.452: INFO: Trying to get logs from node iruya-worker pod var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0 container dapi-container: STEP: delete the pod May 8 13:24:42.498: INFO: Waiting for pod var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0 to disappear May 8 13:24:42.505: INFO: Pod var-expansion-6f82a84a-b315-4da9-a1e5-5d2322bed4f0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:24:42.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5713" for this suite. 
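Unlike the earlier env-composition spec, this one expands a variable inside the container command itself; a minimal sketch (variable name, message, and image are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-command-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["echo"]
      args: ["$(MESSAGE)"]           # substituted from env by Kubernetes before exec, no shell involved
      env:
      - name: MESSAGE
        value: "test-value"
  EOF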
May 8 13:24:48.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:24:48.596: INFO: namespace var-expansion-5713 deletion completed in 6.088192077s • [SLOW TEST:10.299 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:24:48.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 8 13:24:48.697: INFO: Waiting up to 5m0s for pod "pod-06c69c13-2110-4338-8e66-02a919f31c1c" in namespace "emptydir-395" to be "success or failure" May 8 13:24:48.724: INFO: Pod "pod-06c69c13-2110-4338-8e66-02a919f31c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.436486ms May 8 13:24:50.742: INFO: Pod "pod-06c69c13-2110-4338-8e66-02a919f31c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045883599s May 8 13:24:52.747: INFO: Pod "pod-06c69c13-2110-4338-8e66-02a919f31c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050399313s STEP: Saw pod success May 8 13:24:52.747: INFO: Pod "pod-06c69c13-2110-4338-8e66-02a919f31c1c" satisfied condition "success or failure" May 8 13:24:52.751: INFO: Trying to get logs from node iruya-worker pod pod-06c69c13-2110-4338-8e66-02a919f31c1c container test-container: STEP: delete the pod May 8 13:24:52.806: INFO: Waiting for pod pod-06c69c13-2110-4338-8e66-02a919f31c1c to disappear May 8 13:24:52.952: INFO: Pod pod-06c69c13-2110-4338-8e66-02a919f31c1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:24:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-395" for this suite. 
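The tmpfs variant differs from the default-medium emptyDir runs earlier in this suite only in the volume definition; a minimal sketch (names and the probe command are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory       # tmpfs-backed; omit medium for the node's default storage
  EOF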
May 8 13:24:58.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:24:59.069: INFO: namespace emptydir-395 deletion completed in 6.11232252s • [SLOW TEST:10.473 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:24:59.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-02e31fd7-4578-4232-b9df-5e65630c0614 STEP: Creating a pod to test consume secrets May 8 13:24:59.200: INFO: Waiting up to 5m0s for pod "pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253" in namespace "secrets-1204" to be "success or failure" May 8 13:24:59.231: INFO: Pod "pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253": Phase="Pending", Reason="", readiness=false. Elapsed: 30.909415ms May 8 13:25:01.234: INFO: Pod "pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034412869s May 8 13:25:03.237: INFO: Pod "pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037595907s STEP: Saw pod success May 8 13:25:03.237: INFO: Pod "pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253" satisfied condition "success or failure" May 8 13:25:03.239: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253 container secret-volume-test: STEP: delete the pod May 8 13:25:03.271: INFO: Waiting for pod pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253 to disappear May 8 13:25:03.275: INFO: Pod pod-secrets-46013cfe-e6d1-4e46-8630-b067efa8d253 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:25:03.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1204" for this suite. 
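"With mappings" means the secret keys are renamed on their way into the volume via items; a minimal sketch (secret name, key, and target path are illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["cat", "/etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1   # without items the file would simply be named after the key
  EOF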
May 8 13:25:09.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:25:09.341: INFO: namespace secrets-1204 deletion completed in 6.064057149s • [SLOW TEST:10.272 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:25:09.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7333 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 8 13:25:09.484: INFO: Found 0 stateful pods, waiting for 3 May 8 13:25:19.489: INFO: Found 2 stateful pods, waiting for 3 May 8 13:25:29.499: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:25:29.499: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:25:29.499: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 8 13:25:29.527: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 8 13:25:39.583: INFO: Updating stateful set ss2 May 8 13:25:39.624: INFO: Waiting for Pod statefulset-7333/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 8 13:25:50.163: INFO: Found 2 stateful pods, waiting for 3 May 8 13:26:00.168: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:26:00.168: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:26:00.169: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 8 13:26:00.193: INFO: Updating stateful set ss2 May 8 13:26:00.253: INFO: Waiting for Pod statefulset-7333/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 8 13:26:10.262: INFO: Waiting for Pod 
statefulset-7333/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 8 13:26:20.280: INFO: Updating stateful set ss2 May 8 13:26:20.290: INFO: Waiting for StatefulSet statefulset-7333/ss2 to complete update May 8 13:26:20.290: INFO: Waiting for Pod statefulset-7333/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 8 13:26:30.299: INFO: Deleting all statefulset in ns statefulset-7333 May 8 13:26:30.302: INFO: Scaling statefulset ss2 to 0 May 8 13:27:10.321: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:27:10.324: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:27:10.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7333" for this suite. May 8 13:27:16.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:27:16.448: INFO: namespace statefulset-7333 deletion completed in 6.111648732s • [SLOW TEST:127.106 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:27:16.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:27:16.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d" in namespace "downward-api-9125" to be "success or failure" May 8 13:27:16.524: INFO: Pod "downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356755ms May 8 13:27:18.528: INFO: Pod "downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007451126s May 8 13:27:20.532: INFO: Pod "downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011374931s STEP: Saw pod success May 8 13:27:20.532: INFO: Pod "downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d" satisfied condition "success or failure" May 8 13:27:20.534: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d container client-container: STEP: delete the pod May 8 13:27:20.587: INFO: Waiting for pod downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d to disappear May 8 13:27:20.598: INFO: Pod downwardapi-volume-56470fd4-590b-4533-9fbe-c2b5e0b08f6d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:27:20.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9125" for this suite. May 8 13:27:26.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:27:26.709: INFO: namespace downward-api-9125 deletion completed in 6.106512939s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:27:26.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9115 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9115 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9115 May 8 13:27:26.802: INFO: Found 0 stateful pods, waiting for 1 May 8 13:27:36.807: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 8 13:27:36.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:27:37.099: INFO: stderr: "I0508 13:27:36.942576 1869 log.go:172] (0xc00012a790) (0xc00062e820) Create stream\nI0508 13:27:36.942627 1869 log.go:172] (0xc00012a790) (0xc00062e820) Stream added, broadcasting: 1\nI0508 13:27:36.945335 1869 log.go:172] (0xc00012a790) Reply frame received for 1\nI0508 13:27:36.945387 
1869 log.go:172] (0xc00012a790) (0xc00062c1e0) Create stream\nI0508 13:27:36.945433 1869 log.go:172] (0xc00012a790) (0xc00062c1e0) Stream added, broadcasting: 3\nI0508 13:27:36.946559 1869 log.go:172] (0xc00012a790) Reply frame received for 3\nI0508 13:27:36.946624 1869 log.go:172] (0xc00012a790) (0xc00062e8c0) Create stream\nI0508 13:27:36.946649 1869 log.go:172] (0xc00012a790) (0xc00062e8c0) Stream added, broadcasting: 5\nI0508 13:27:36.947755 1869 log.go:172] (0xc00012a790) Reply frame received for 5\nI0508 13:27:37.048423 1869 log.go:172] (0xc00012a790) Data frame received for 5\nI0508 13:27:37.048457 1869 log.go:172] (0xc00062e8c0) (5) Data frame handling\nI0508 13:27:37.048477 1869 log.go:172] (0xc00062e8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:27:37.091533 1869 log.go:172] (0xc00012a790) Data frame received for 3\nI0508 13:27:37.091621 1869 log.go:172] (0xc00062c1e0) (3) Data frame handling\nI0508 13:27:37.091655 1869 log.go:172] (0xc00062c1e0) (3) Data frame sent\nI0508 13:27:37.091778 1869 log.go:172] (0xc00012a790) Data frame received for 3\nI0508 13:27:37.091812 1869 log.go:172] (0xc00062c1e0) (3) Data frame handling\nI0508 13:27:37.092690 1869 log.go:172] (0xc00012a790) Data frame received for 5\nI0508 13:27:37.092826 1869 log.go:172] (0xc00062e8c0) (5) Data frame handling\nI0508 13:27:37.094670 1869 log.go:172] (0xc00012a790) Data frame received for 1\nI0508 13:27:37.094709 1869 log.go:172] (0xc00062e820) (1) Data frame handling\nI0508 13:27:37.094753 1869 log.go:172] (0xc00062e820) (1) Data frame sent\nI0508 13:27:37.094776 1869 log.go:172] (0xc00012a790) (0xc00062e820) Stream removed, broadcasting: 1\nI0508 13:27:37.094792 1869 log.go:172] (0xc00012a790) Go away received\nI0508 13:27:37.095306 1869 log.go:172] (0xc00012a790) (0xc00062e820) Stream removed, broadcasting: 1\nI0508 13:27:37.095329 1869 log.go:172] (0xc00012a790) (0xc00062c1e0) Stream removed, broadcasting: 3\nI0508 13:27:37.095340 1869 log.go:172] (0xc00012a790) (0xc00062e8c0) Stream removed, broadcasting: 5\n" May 8 13:27:37.099: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:27:37.099: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:27:37.103: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 13:27:47.108: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 13:27:47.108: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:27:47.146: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:27:47.146: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC }] May 8 13:27:47.146: INFO: May 8 13:27:47.146: INFO: StatefulSet ss has not reached scale 3, at 1 May 8 13:27:48.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972563358s May 8 13:27:49.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967224179s May 8 13:27:50.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961657191s May 
8 13:27:51.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956311801s May 8 13:27:52.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948749651s May 8 13:27:53.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.94354787s May 8 13:27:54.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.938791078s May 8 13:27:55.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93297051s May 8 13:27:56.202: INFO: Verifying statefulset ss doesn't scale past 3 for another 928.252832ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9115 May 8 13:27:57.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:27:57.498: INFO: stderr: "I0508 13:27:57.339613 1890 log.go:172] (0xc0009e20b0) (0xc00073c1e0) Create stream\nI0508 13:27:57.339688 1890 log.go:172] (0xc0009e20b0) (0xc00073c1e0) Stream added, broadcasting: 1\nI0508 13:27:57.371500 1890 log.go:172] (0xc0009e20b0) Reply frame received for 1\nI0508 13:27:57.371539 1890 log.go:172] (0xc0009e20b0) (0xc00073c280) Create stream\nI0508 13:27:57.371548 1890 log.go:172] (0xc0009e20b0) (0xc00073c280) Stream added, broadcasting: 3\nI0508 13:27:57.372388 1890 log.go:172] (0xc0009e20b0) Reply frame received for 3\nI0508 13:27:57.372414 1890 log.go:172] (0xc0009e20b0) (0xc00028a1e0) Create stream\nI0508 13:27:57.372423 1890 log.go:172] (0xc0009e20b0) (0xc00028a1e0) Stream added, broadcasting: 5\nI0508 13:27:57.373212 1890 log.go:172] (0xc0009e20b0) Reply frame received for 5\nI0508 13:27:57.492368 1890 log.go:172] (0xc0009e20b0) Data frame received for 3\nI0508 13:27:57.492521 1890 log.go:172] (0xc00073c280) (3) Data frame handling\nI0508 13:27:57.492560 1890 log.go:172] (0xc00073c280) (3) Data frame sent\nI0508 13:27:57.492575 1890 log.go:172] (0xc0009e20b0) Data frame received for 3\nI0508 13:27:57.492589 1890 log.go:172] (0xc00073c280) (3) Data frame handling\nI0508 13:27:57.492769 1890 log.go:172] (0xc0009e20b0) Data frame received for 5\nI0508 13:27:57.492794 1890 log.go:172] (0xc00028a1e0) (5) Data frame handling\nI0508 13:27:57.492815 1890 log.go:172] (0xc00028a1e0) (5) Data frame sent\nI0508 13:27:57.492836 1890 log.go:172] (0xc0009e20b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:27:57.492845 1890 log.go:172] (0xc00028a1e0) (5) Data frame handling\nI0508 13:27:57.494447 1890 log.go:172] (0xc0009e20b0) Data frame received for 1\nI0508 13:27:57.494474 1890 log.go:172] (0xc00073c1e0) (1) Data frame handling\nI0508 13:27:57.494494 1890 log.go:172] (0xc00073c1e0) (1) Data frame sent\nI0508 13:27:57.494520 1890 log.go:172] (0xc0009e20b0) (0xc00073c1e0) Stream removed, broadcasting: 1\nI0508 13:27:57.494540 1890 log.go:172] (0xc0009e20b0) Go away received\nI0508 13:27:57.494829 1890 log.go:172] (0xc0009e20b0) (0xc00073c1e0) Stream removed, broadcasting: 1\nI0508 13:27:57.494842 1890 log.go:172] (0xc0009e20b0) (0xc00073c280) Stream removed, broadcasting: 3\nI0508 13:27:57.494848 1890 log.go:172] (0xc0009e20b0) (0xc00028a1e0) Stream removed, broadcasting: 5\n" May 8 13:27:57.498: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:27:57.498: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:27:57.498: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:27:57.728: INFO: stderr: "I0508 13:27:57.640355 1909 log.go:172] (0xc000116dc0) (0xc0007806e0) Create stream\nI0508 13:27:57.640406 1909 log.go:172] (0xc000116dc0) (0xc0007806e0) Stream added, broadcasting: 1\nI0508 13:27:57.642677 1909 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0508 13:27:57.642729 1909 log.go:172] (0xc000116dc0) (0xc000780780) Create stream\nI0508 13:27:57.642742 1909 log.go:172] (0xc000116dc0) (0xc000780780) Stream added, broadcasting: 3\nI0508 13:27:57.643792 1909 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0508 13:27:57.643836 1909 log.go:172] (0xc000116dc0) (0xc0003a2500) Create stream\nI0508 13:27:57.643857 1909 log.go:172] (0xc000116dc0) (0xc0003a2500) Stream added, broadcasting: 5\nI0508 13:27:57.644767 1909 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0508 13:27:57.721082 1909 log.go:172] (0xc000116dc0) Data frame received for 5\nI0508 13:27:57.721221 1909 log.go:172] (0xc0003a2500) (5) Data frame handling\nI0508 13:27:57.721235 1909 log.go:172] (0xc0003a2500) (5) Data frame sent\nI0508 13:27:57.721242 1909 log.go:172] (0xc000116dc0) Data frame received for 5\nI0508 13:27:57.721246 1909 log.go:172] (0xc0003a2500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 13:27:57.721297 1909 log.go:172] (0xc000116dc0) Data frame received for 3\nI0508 13:27:57.721315 1909 log.go:172] (0xc000780780) (3) Data frame handling\nI0508 13:27:57.721340 1909 log.go:172] (0xc000780780) (3) Data frame sent\nI0508 13:27:57.721436 1909 log.go:172] (0xc000116dc0) Data frame received for 3\nI0508 13:27:57.721451 1909 log.go:172] (0xc000780780) (3) Data frame handling\nI0508 13:27:57.723387 1909 log.go:172] (0xc000116dc0) Data frame received for 1\nI0508 13:27:57.723402 1909 log.go:172] (0xc0007806e0) (1) Data frame handling\nI0508 13:27:57.723415 1909 log.go:172] (0xc0007806e0) (1) Data frame sent\nI0508 13:27:57.723461 1909 log.go:172] (0xc000116dc0) (0xc0007806e0) Stream removed, broadcasting: 1\nI0508 13:27:57.723616 1909 log.go:172] (0xc000116dc0) Go away received\nI0508 13:27:57.723737 1909 log.go:172] (0xc000116dc0) (0xc0007806e0) Stream removed, broadcasting: 1\nI0508 13:27:57.723755 1909 log.go:172] (0xc000116dc0) (0xc000780780) Stream removed, broadcasting: 3\nI0508 13:27:57.723766 1909 log.go:172] (0xc000116dc0) (0xc0003a2500) Stream removed, broadcasting: 5\n" May 8 13:27:57.728: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:27:57.728: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:27:57.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:27:57.934: INFO: stderr: "I0508 13:27:57.850251 1930 log.go:172] (0xc000116dc0) (0xc000314820) Create stream\nI0508 13:27:57.850307 1930 log.go:172] (0xc000116dc0) (0xc000314820) Stream added, broadcasting: 1\nI0508 13:27:57.856864 1930 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0508 13:27:57.856931 1930 log.go:172] (0xc000116dc0) (0xc0006d4000) Create stream\nI0508 13:27:57.856949 1930 log.go:172] (0xc000116dc0) (0xc0006d4000) Stream added, 
broadcasting: 3\nI0508 13:27:57.858635 1930 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0508 13:27:57.858671 1930 log.go:172] (0xc000116dc0) (0xc0003148c0) Create stream\nI0508 13:27:57.858703 1930 log.go:172] (0xc000116dc0) (0xc0003148c0) Stream added, broadcasting: 5\nI0508 13:27:57.866258 1930 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0508 13:27:57.924958 1930 log.go:172] (0xc000116dc0) Data frame received for 3\nI0508 13:27:57.925001 1930 log.go:172] (0xc0006d4000) (3) Data frame handling\nI0508 13:27:57.925021 1930 log.go:172] (0xc0006d4000) (3) Data frame sent\nI0508 13:27:57.925037 1930 log.go:172] (0xc000116dc0) Data frame received for 3\nI0508 13:27:57.925054 1930 log.go:172] (0xc0006d4000) (3) Data frame handling\nI0508 13:27:57.925072 1930 log.go:172] (0xc000116dc0) Data frame received for 5\nI0508 13:27:57.925085 1930 log.go:172] (0xc0003148c0) (5) Data frame handling\nI0508 13:27:57.925095 1930 log.go:172] (0xc0003148c0) (5) Data frame sent\nI0508 13:27:57.925108 1930 log.go:172] (0xc000116dc0) Data frame received for 5\nI0508 13:27:57.925306 1930 log.go:172] (0xc0003148c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 13:27:57.928308 1930 log.go:172] (0xc000116dc0) Data frame received for 1\nI0508 13:27:57.928354 1930 log.go:172] (0xc000314820) (1) Data frame handling\nI0508 13:27:57.928371 1930 log.go:172] (0xc000314820) (1) Data frame sent\nI0508 13:27:57.928390 1930 log.go:172] (0xc000116dc0) (0xc000314820) Stream removed, broadcasting: 1\nI0508 13:27:57.928934 1930 log.go:172] (0xc000116dc0) (0xc000314820) Stream removed, broadcasting: 1\nI0508 13:27:57.928980 1930 log.go:172] (0xc000116dc0) (0xc0006d4000) Stream removed, broadcasting: 3\nI0508 13:27:57.928997 1930 log.go:172] (0xc000116dc0) (0xc0003148c0) Stream removed, broadcasting: 5\n" May 8 13:27:57.934: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:27:57.934: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:27:57.938: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 8 13:28:07.943: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:28:07.943: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:28:07.943: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 8 13:28:07.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:28:08.182: INFO: stderr: "I0508 13:28:08.086351 1950 log.go:172] (0xc000a6a420) (0xc000988640) Create stream\nI0508 13:28:08.086408 1950 log.go:172] (0xc000a6a420) (0xc000988640) Stream added, broadcasting: 1\nI0508 13:28:08.088784 1950 log.go:172] (0xc000a6a420) Reply frame received for 1\nI0508 13:28:08.088828 1950 log.go:172] (0xc000a6a420) (0xc0008f4000) Create stream\nI0508 13:28:08.088841 1950 log.go:172] (0xc000a6a420) (0xc0008f4000) Stream added, broadcasting: 3\nI0508 13:28:08.090337 1950 log.go:172] (0xc000a6a420) Reply frame received for 3\nI0508 13:28:08.090388 1950 log.go:172] (0xc000a6a420) (0xc0009886e0) Create stream\nI0508 13:28:08.090409 1950 log.go:172] 
(0xc000a6a420) (0xc0009886e0) Stream added, broadcasting: 5\nI0508 13:28:08.091521 1950 log.go:172] (0xc000a6a420) Reply frame received for 5\nI0508 13:28:08.176286 1950 log.go:172] (0xc000a6a420) Data frame received for 5\nI0508 13:28:08.176328 1950 log.go:172] (0xc0009886e0) (5) Data frame handling\nI0508 13:28:08.176338 1950 log.go:172] (0xc0009886e0) (5) Data frame sent\nI0508 13:28:08.176346 1950 log.go:172] (0xc000a6a420) Data frame received for 5\nI0508 13:28:08.176351 1950 log.go:172] (0xc0009886e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:28:08.176370 1950 log.go:172] (0xc000a6a420) Data frame received for 3\nI0508 13:28:08.176377 1950 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0508 13:28:08.176384 1950 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0508 13:28:08.176390 1950 log.go:172] (0xc000a6a420) Data frame received for 3\nI0508 13:28:08.176395 1950 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0508 13:28:08.177975 1950 log.go:172] (0xc000a6a420) Data frame received for 1\nI0508 13:28:08.177995 1950 log.go:172] (0xc000988640) (1) Data frame handling\nI0508 13:28:08.178005 1950 log.go:172] (0xc000988640) (1) Data frame sent\nI0508 13:28:08.178066 1950 log.go:172] (0xc000a6a420) (0xc000988640) Stream removed, broadcasting: 1\nI0508 13:28:08.178335 1950 log.go:172] (0xc000a6a420) (0xc000988640) Stream removed, broadcasting: 1\nI0508 13:28:08.178351 1950 log.go:172] (0xc000a6a420) (0xc0008f4000) Stream removed, broadcasting: 3\nI0508 13:28:08.178457 1950 log.go:172] (0xc000a6a420) (0xc0009886e0) Stream removed, broadcasting: 5\n" May 8 13:28:08.182: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:28:08.182: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:28:08.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:28:08.419: INFO: stderr: "I0508 13:28:08.310168 1971 log.go:172] (0xc000984370) (0xc00083e640) Create stream\nI0508 13:28:08.310222 1971 log.go:172] (0xc000984370) (0xc00083e640) Stream added, broadcasting: 1\nI0508 13:28:08.312092 1971 log.go:172] (0xc000984370) Reply frame received for 1\nI0508 13:28:08.312127 1971 log.go:172] (0xc000984370) (0xc000926000) Create stream\nI0508 13:28:08.312135 1971 log.go:172] (0xc000984370) (0xc000926000) Stream added, broadcasting: 3\nI0508 13:28:08.312979 1971 log.go:172] (0xc000984370) Reply frame received for 3\nI0508 13:28:08.313021 1971 log.go:172] (0xc000984370) (0xc0006221e0) Create stream\nI0508 13:28:08.313035 1971 log.go:172] (0xc000984370) (0xc0006221e0) Stream added, broadcasting: 5\nI0508 13:28:08.314154 1971 log.go:172] (0xc000984370) Reply frame received for 5\nI0508 13:28:08.381049 1971 log.go:172] (0xc000984370) Data frame received for 5\nI0508 13:28:08.381090 1971 log.go:172] (0xc0006221e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:28:08.381334 1971 log.go:172] (0xc0006221e0) (5) Data frame sent\nI0508 13:28:08.411402 1971 log.go:172] (0xc000984370) Data frame received for 3\nI0508 13:28:08.411440 1971 log.go:172] (0xc000926000) (3) Data frame handling\nI0508 13:28:08.411477 1971 log.go:172] (0xc000926000) (3) Data frame sent\nI0508 13:28:08.411498 1971 log.go:172] (0xc000984370) Data frame received for 3\nI0508 13:28:08.411530 1971 log.go:172] 
(0xc000926000) (3) Data frame handling\nI0508 13:28:08.411613 1971 log.go:172] (0xc000984370) Data frame received for 5\nI0508 13:28:08.411636 1971 log.go:172] (0xc0006221e0) (5) Data frame handling\nI0508 13:28:08.413809 1971 log.go:172] (0xc000984370) Data frame received for 1\nI0508 13:28:08.413831 1971 log.go:172] (0xc00083e640) (1) Data frame handling\nI0508 13:28:08.413842 1971 log.go:172] (0xc00083e640) (1) Data frame sent\nI0508 13:28:08.413854 1971 log.go:172] (0xc000984370) (0xc00083e640) Stream removed, broadcasting: 1\nI0508 13:28:08.414226 1971 log.go:172] (0xc000984370) (0xc00083e640) Stream removed, broadcasting: 1\nI0508 13:28:08.414248 1971 log.go:172] (0xc000984370) (0xc000926000) Stream removed, broadcasting: 3\nI0508 13:28:08.414257 1971 log.go:172] (0xc000984370) (0xc0006221e0) Stream removed, broadcasting: 5\n" May 8 13:28:08.419: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:28:08.419: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:28:08.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:28:08.665: INFO: stderr: "I0508 13:28:08.544759 1991 log.go:172] (0xc000a28420) (0xc00090a6e0) Create stream\nI0508 13:28:08.544807 1991 log.go:172] (0xc000a28420) (0xc00090a6e0) Stream added, broadcasting: 1\nI0508 13:28:08.547295 1991 log.go:172] (0xc000a28420) Reply frame received for 1\nI0508 13:28:08.547336 1991 log.go:172] (0xc000a28420) (0xc0005281e0) Create stream\nI0508 13:28:08.547351 1991 log.go:172] (0xc000a28420) (0xc0005281e0) Stream added, broadcasting: 3\nI0508 13:28:08.548402 1991 log.go:172] (0xc000a28420) Reply frame received for 3\nI0508 13:28:08.548419 1991 log.go:172] (0xc000a28420) (0xc00090a780) Create stream\nI0508 13:28:08.548425 1991 log.go:172] (0xc000a28420) (0xc00090a780) Stream added, broadcasting: 5\nI0508 13:28:08.549734 1991 log.go:172] (0xc000a28420) Reply frame received for 5\nI0508 13:28:08.625364 1991 log.go:172] (0xc000a28420) Data frame received for 5\nI0508 13:28:08.625393 1991 log.go:172] (0xc00090a780) (5) Data frame handling\nI0508 13:28:08.625430 1991 log.go:172] (0xc00090a780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:28:08.658061 1991 log.go:172] (0xc000a28420) Data frame received for 3\nI0508 13:28:08.658080 1991 log.go:172] (0xc0005281e0) (3) Data frame handling\nI0508 13:28:08.658087 1991 log.go:172] (0xc0005281e0) (3) Data frame sent\nI0508 13:28:08.658092 1991 log.go:172] (0xc000a28420) Data frame received for 3\nI0508 13:28:08.658097 1991 log.go:172] (0xc0005281e0) (3) Data frame handling\nI0508 13:28:08.658119 1991 log.go:172] (0xc000a28420) Data frame received for 5\nI0508 13:28:08.658125 1991 log.go:172] (0xc00090a780) (5) Data frame handling\nI0508 13:28:08.660094 1991 log.go:172] (0xc000a28420) Data frame received for 1\nI0508 13:28:08.660105 1991 log.go:172] (0xc00090a6e0) (1) Data frame handling\nI0508 13:28:08.660111 1991 log.go:172] (0xc00090a6e0) (1) Data frame sent\nI0508 13:28:08.660118 1991 log.go:172] (0xc000a28420) (0xc00090a6e0) Stream removed, broadcasting: 1\nI0508 13:28:08.660216 1991 log.go:172] (0xc000a28420) Go away received\nI0508 13:28:08.660362 1991 log.go:172] (0xc000a28420) (0xc00090a6e0) Stream removed, broadcasting: 1\nI0508 13:28:08.660378 1991 log.go:172] (0xc000a28420) (0xc0005281e0) Stream 
removed, broadcasting: 3\nI0508 13:28:08.660384 1991 log.go:172] (0xc000a28420) (0xc00090a780) Stream removed, broadcasting: 5\n" May 8 13:28:08.665: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:28:08.665: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:28:08.665: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:28:08.678: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 8 13:28:18.687: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 13:28:18.687: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 8 13:28:18.687: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 8 13:28:18.770: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:18.770: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC }] May 8 13:28:18.770: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:18.770: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:18.770: INFO: May 8 13:28:18.770: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 13:28:19.775: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:19.775: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC }] May 8 13:28:19.775: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 
+0000 UTC }] May 8 13:28:19.775: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:19.775: INFO: May 8 13:28:19.775: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 13:28:20.780: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:20.780: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC }] May 8 13:28:20.780: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:20.780: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:20.780: INFO: May 8 13:28:20.780: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 13:28:21.786: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:21.786: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:26 +0000 UTC }] May 8 13:28:21.786: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:21.786: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:21.786: INFO: May 8 13:28:21.786: INFO: StatefulSet ss has not reached scale 0, at 3 May 8 13:28:22.847: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:22.847: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:22.847: INFO: May 8 13:28:22.847: INFO: StatefulSet ss has not reached scale 0, at 1 May 8 13:28:23.851: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:23.851: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:23.851: INFO: May 8 13:28:23.851: INFO: StatefulSet ss has not reached scale 0, at 1 May 8 13:28:24.856: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:24.856: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:24.856: INFO: May 8 13:28:24.856: INFO: StatefulSet ss has not reached scale 0, at 1 May 8 13:28:25.861: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:25.861: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:25.861: INFO: May 8 13:28:25.861: INFO: StatefulSet ss has not reached scale 0, at 1 May 8 13:28:26.864: INFO: POD NODE PHASE GRACE CONDITIONS May 8 13:28:26.864: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }] May 8 13:28:26.864: INFO: May 8 13:28:26.864: INFO: 
StatefulSet ss has not reached scale 0, at 1
May 8 13:28:27.868: INFO: POD NODE PHASE GRACE CONDITIONS
May 8 13:28:27.868: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:28:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:27:47 +0000 UTC }]
May 8 13:28:27.868: INFO:
May 8 13:28:27.868: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9115
May 8 13:28:28.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 13:28:31.350: INFO: rc: 1
May 8 13:28:31.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002717d70 exit status 1 true [0xc002718410 0xc002718428 0xc002718468] [0xc002718410 0xc002718428 0xc002718468] [0xc002718420 0xc002718450] [0xba70e0 0xba70e0] 0xc003259c20 }:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("nginx")
error:
exit status 1
May 8 13:28:41.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 13:28:41.448: INFO: rc: 1
May 8 13:28:41.448: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0027b52f0 exit status 1 true [0xc000970ce0 0xc000970d68 0xc000970dd0] [0xc000970ce0 0xc000970d68 0xc000970dd0] [0xc000970d60 0xc000970da8] [0xba70e0 0xba70e0] 0xc002c26900 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-1" not found
error:
exit status 1
May 8 13:33:34.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9115 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 8 13:33:34.426: INFO: rc: 1
May 8 13:33:34.426: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1:
May 8 13:33:34.426: INFO: Scaling statefulset ss to 0
May 8 13:33:34.434: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 8 13:33:34.437: INFO: Deleting all statefulset in ns statefulset-9115
May 8 13:33:34.439: INFO: Scaling statefulset ss to 0
May 8 13:33:34.447: INFO: Waiting for statefulset status.replicas updated to 0
May 8 13:33:34.449: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:33:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9115" for this suite.
May 8 13:33:40.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:33:40.566: INFO: namespace statefulset-9115 deletion completed in 6.098099659s
• [SLOW TEST:373.857 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
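For readers reconstructing what the burst-scaling test above exercised: it breaks pod readiness by moving the file behind the nginx readiness probe out of the webroot, then checks that a scale-down proceeds even while pods are unready. A minimal sketch of the same technique with plain kubectl follows; it assumes a StatefulSet named ss whose pods serve /usr/share/nginx/html/index.html as the readiness target (names mirror the log but are otherwise illustrative, not the e2e fixture itself). The trailing || true, as in the log, tolerates the pod disappearing mid-command.

    # Make one replica fail its readiness probe by removing the probed file.
    kubectl exec ss-1 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

    # Scale to zero while the pod is unready; with burst (Parallel) pod
    # management the controller deletes pods without waiting for readiness.
    kubectl scale statefulset ss --replicas=0

    # Confirm status.replicas has drained to 0.
    kubectl get statefulset ss -o jsonpath='{.status.replicas}'
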
failure" May 8 13:33:40.689: INFO: Pod "pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.944493ms May 8 13:33:42.694: INFO: Pod "pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043323543s May 8 13:33:44.698: INFO: Pod "pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047909293s STEP: Saw pod success May 8 13:33:44.698: INFO: Pod "pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d" satisfied condition "success or failure" May 8 13:33:44.701: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d container secret-env-test: STEP: delete the pod May 8 13:33:44.805: INFO: Waiting for pod pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d to disappear May 8 13:33:44.816: INFO: Pod pod-secrets-9f3ad735-39ae-4d0f-ab23-02c7da5d7a9d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:33:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2201" for this suite. May 8 13:33:50.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:33:50.949: INFO: namespace secrets-2201 deletion completed in 6.129182963s • [SLOW TEST:10.383 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:33:50.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 8 13:33:51.000: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:33:57.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2154" for this suite. 
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:33:50.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 8 13:33:51.000: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:33:57.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2154" for this suite.
May 8 13:34:03.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:34:03.799: INFO: namespace init-container-2154 deletion completed in 6.077695077s
• [SLOW TEST:12.850 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
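The InitContainer test creates a pod whose init container exits non-zero under restartPolicy: Never, and asserts the app container never starts and the pod phase settles at Failed. A minimal sketch of such a pod spec (illustrative names, not the e2e fixture):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fails          # exits 1, so the pod can never proceed
        image: busybox
        command: ["sh", "-c", "exit 1"]
      containers:
      - name: app                 # should never be started
        image: busybox
        command: ["sh", "-c", "echo this should not run"]
    EOF

    # With restartPolicy Never the failed init container is not retried,
    # so the phase reported here eventually becomes Failed.
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'
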
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:34:03.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-36b6d633-54e8-4ccb-a027-0bb7e1b55a7c
STEP: Creating a pod to test consume configMaps
May 8 13:34:03.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-484d0d18-3359-446b-bc46-166628b34526" in namespace "configmap-2869" to be "success or failure"
May 8 13:34:03.913: INFO: Pod "pod-configmaps-484d0d18-3359-446b-bc46-166628b34526": Phase="Pending", Reason="", readiness=false. Elapsed: 5.012173ms
May 8 13:34:05.929: INFO: Pod "pod-configmaps-484d0d18-3359-446b-bc46-166628b34526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02069182s
May 8 13:34:07.933: INFO: Pod "pod-configmaps-484d0d18-3359-446b-bc46-166628b34526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024702733s
STEP: Saw pod success
May 8 13:34:07.933: INFO: Pod "pod-configmaps-484d0d18-3359-446b-bc46-166628b34526" satisfied condition "success or failure"
May 8 13:34:07.936: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-484d0d18-3359-446b-bc46-166628b34526 container configmap-volume-test:
STEP: delete the pod
May 8 13:34:07.955: INFO: Waiting for pod pod-configmaps-484d0d18-3359-446b-bc46-166628b34526 to disappear
May 8 13:34:08.007: INFO: Pod pod-configmaps-484d0d18-3359-446b-bc46-166628b34526 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:34:08.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2869" for this suite.
May 8 13:34:14.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:34:14.104: INFO: namespace configmap-2869 deletion completed in 6.093786801s
• [SLOW TEST:10.304 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
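The ConfigMap test mounts a ConfigMap as a volume with a non-default file mode and has the container verify mode and content. A minimal sketch under assumed illustrative names (test-config, mode 0400):

    kubectl create configmap test-config --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/config/data-1 && cat /etc/config/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: test-config
          defaultMode: 0400   # files appear as -r-------- inside the volume
    EOF

    kubectl logs configmap-mode-demo
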
May 8 13:35:04.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:35:04.666: INFO: namespace pods-1400 deletion completed in 42.122585457s • [SLOW TEST:50.562 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:35:04.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-d17a1e75-c6a4-40ce-adf8-52814e939a00 STEP: Creating a pod to test consume secrets May 8 13:35:04.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18" in namespace "projected-4460" to be "success or failure" May 8 13:35:04.759: INFO: Pod "pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18": Phase="Pending", Reason="", readiness=false. Elapsed: 12.628258ms May 8 13:35:06.763: INFO: Pod "pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016631067s May 8 13:35:08.768: INFO: Pod "pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021182718s STEP: Saw pod success May 8 13:35:08.768: INFO: Pod "pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18" satisfied condition "success or failure" May 8 13:35:08.772: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18 container projected-secret-volume-test: STEP: delete the pod May 8 13:35:08.810: INFO: Waiting for pod pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18 to disappear May 8 13:35:08.849: INFO: Pod pod-projected-secrets-540d9d21-010b-4062-b3a0-66aed48dac18 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:35:08.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4460" for this suite. 
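The projected volume in the test above has a single Secret source. A sketch of that volume using the k8s.io/api types; the volume name and mode are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume builds a "projected" volume whose only source is
// a Secret, the construct the test mounts and reads back.
func projectedSecretVolume(secretName string) corev1.Volume {
	mode := int32(0444)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}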
May 8 13:35:14.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:35:14.945: INFO: namespace projected-4460 deletion completed in 6.093021418s • [SLOW TEST:10.278 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:35:14.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 8 13:35:15.030: INFO: Waiting up to 5m0s for pod "downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3" in namespace "downward-api-7758" to be "success or failure" May 8 13:35:15.046: INFO: Pod "downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.960843ms May 8 13:35:17.080: INFO: Pod "downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049590865s May 8 13:35:19.083: INFO: Pod "downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05306813s STEP: Saw pod success May 8 13:35:19.083: INFO: Pod "downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3" satisfied condition "success or failure" May 8 13:35:19.086: INFO: Trying to get logs from node iruya-worker pod downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3 container dapi-container: STEP: delete the pod May 8 13:35:19.107: INFO: Waiting for pod downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3 to disappear May 8 13:35:19.111: INFO: Pod downward-api-7b89302c-0b04-4264-8b5f-e4835fb5aee3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:35:19.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7758" for this suite. 
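The downward API env var this test checks is a fieldRef to status.hostIP. A minimal sketch of that EnvVar:

package sketch

import corev1 "k8s.io/api/core/v1"

// hostIPEnvVar exposes the node's IP to the container as HOST_IP via the
// downward API, which is what the dapi-container asserts on.
func hostIPEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "status.hostIP",
			},
		},
	}
}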
May 8 13:35:25.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:35:25.219: INFO: namespace downward-api-7758 deletion completed in 6.10413416s • [SLOW TEST:10.273 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:35:25.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 8 13:35:25.542: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7521" to be "success or failure" May 8 13:35:25.549: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809165ms May 8 13:35:27.727: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184695351s May 8 13:35:29.731: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188609854s May 8 13:35:31.735: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192780555s STEP: Saw pod success May 8 13:35:31.735: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 8 13:35:31.739: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 8 13:35:31.781: INFO: Waiting for pod pod-host-path-test to disappear May 8 13:35:31.819: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:35:31.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7521" for this suite. 
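The volume "pod-host-path-test" mounts is a plain hostPath; the containers then verify the mode of the mount point. A sketch of the volume; path and type here are illustrative, not necessarily what the framework uses:

package sketch

import corev1 "k8s.io/api/core/v1"

// hostPathVolume mounts a directory from the node's filesystem, creating
// it if absent; the test then checks the resulting mode from inside.
func hostPathVolume() corev1.Volume {
	hostPathType := corev1.HostPathDirectoryOrCreate
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/test-volume",
				Type: &hostPathType,
			},
		},
	}
}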
May 8 13:35:37.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:35:37.900: INFO: namespace hostpath-7521 deletion completed in 6.077780289s • [SLOW TEST:12.681 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:35:37.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:35:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7038" for this suite. 
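"The orphan pod is adopted" means the bare pod gains a controller ownerReference pointing at the new ReplicationController. A hedged client-go sketch of that check; the function name is invented and recent client-go signatures are assumed:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkAdoption reports whether the pod now carries a controller
// ownerReference of kind ReplicationController, i.e. has been adopted.
func checkAdoption(ctx context.Context, cs kubernetes.Interface, ns, podName string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller && ref.Kind == "ReplicationController" {
			fmt.Printf("pod %s adopted by RC %s\n", podName, ref.Name)
			return true, nil
		}
	}
	return false, nil
}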
May 8 13:36:05.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:36:05.256: INFO: namespace replication-controller-7038 deletion completed in 22.134290855s • [SLOW TEST:27.355 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:36:05.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:36:05.319: INFO: Creating ReplicaSet my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5 May 8 13:36:05.355: INFO: Pod name my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5: Found 0 pods out of 1 May 8 13:36:10.360: INFO: Pod name my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5: Found 1 pods out of 1 May 8 13:36:10.360: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5" is running May 8 13:36:10.363: INFO: Pod "my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5-2f6rr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 13:36:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 13:36:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 13:36:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 13:36:05 +0000 UTC Reason: Message:}]) May 8 13:36:10.363: INFO: Trying to dial the pod May 8 13:36:15.373: INFO: Controller my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5: Got expected result from replica 1 [my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5-2f6rr]: "my-hostname-basic-7f474f90-f3c5-4e3b-a9ff-e0b3c648dce5-2f6rr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:36:15.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5230" for this suite. 
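The ReplicaSet above runs one replica of a public serve-hostname image and is then dialed so each pod answers with its own name. A sketch of the object; the image tag and port follow this era's e2e conventions and should be read as assumptions:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// basicReplicaSet declares one replica of a serve-hostname-style image,
// matching pods on a "name" label as the test does.
func basicReplicaSet(name string) *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
}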
May 8 13:36:21.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:36:21.476: INFO: namespace replicaset-5230 deletion completed in 6.099958626s • [SLOW TEST:16.219 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:36:21.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 8 13:36:21.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 8 13:36:21.630: INFO: stderr: "" May 8 13:36:21.630: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:36:21.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9429" for this suite. 
May 8 13:36:27.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:36:27.717: INFO: namespace kubectl-9429 deletion completed in 6.083419383s • [SLOW TEST:6.242 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:36:27.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-91fa6f6d-0b9b-451e-8786-3a53f3e05575 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-91fa6f6d-0b9b-451e-8786-3a53f3e05575 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:37:50.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7850" for this suite. 
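The reason the test above only has to wait "to observe update in volume" is that the kubelet rewrites projected ConfigMap files on its periodic sync, with no pod restart. A sketch of the update half, assuming a recent client-go; the data key and values are illustrative:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMap changes the data the projected volume serves; the
// kubelet propagates the new content to the mounted files on its own.
func updateConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cm.Data = map[string]string{"data-1": "value-2"} // previously value-1
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}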
May 8 13:38:12.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:38:12.395: INFO: namespace projected-7850 deletion completed in 22.109364726s • [SLOW TEST:104.677 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:38:12.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 8 13:38:13.041: INFO: created pod pod-service-account-defaultsa May 8 13:38:13.041: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 8 13:38:13.049: INFO: created pod pod-service-account-mountsa May 8 13:38:13.049: INFO: pod pod-service-account-mountsa service account token volume mount: true May 8 13:38:13.055: INFO: created pod pod-service-account-nomountsa May 8 13:38:13.055: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 8 13:38:13.082: INFO: created pod pod-service-account-defaultsa-mountspec May 8 13:38:13.082: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 8 13:38:13.097: INFO: created pod pod-service-account-mountsa-mountspec May 8 13:38:13.097: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 8 13:38:13.165: INFO: created pod pod-service-account-nomountsa-mountspec May 8 13:38:13.165: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 8 13:38:13.199: INFO: created pod pod-service-account-defaultsa-nomountspec May 8 13:38:13.199: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 8 13:38:13.233: INFO: created pod pod-service-account-mountsa-nomountspec May 8 13:38:13.233: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 8 13:38:13.292: INFO: created pod pod-service-account-nomountsa-nomountspec May 8 13:38:13.292: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:38:13.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5369" for this suite. 
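The nine pods above permute the pod-level and ServiceAccount-level AutomountServiceAccountToken fields; when the pod-level field is set, it wins. A sketch of the pod-level knob; container names and image are invented:

package sketch

import corev1 "k8s.io/api/core/v1"

// automount builds a pod spec that explicitly opts in or out of the API
// token mount; leaving the field nil defers to the ServiceAccount, and
// leaving both nil defaults to mounting the token.
func automount(optOut bool) corev1.PodSpec {
	mount := !optOut
	return corev1.PodSpec{
		ServiceAccountName:           "default",
		AutomountServiceAccountToken: &mount,
		Containers: []corev1.Container{{
			Name:  "token-test",
			Image: "busybox",
		}},
	}
}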
May 8 13:38:41.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:38:41.564: INFO: namespace svcaccounts-5369 deletion completed in 28.231147094s • [SLOW TEST:29.169 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:38:41.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:38:41.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022" in namespace "downward-api-8820" to be "success or failure" May 8 13:38:41.720: INFO: Pod "downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108393ms May 8 13:38:43.723: INFO: Pod "downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009668014s May 8 13:38:45.727: INFO: Pod "downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013115865s STEP: Saw pod success May 8 13:38:45.727: INFO: Pod "downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022" satisfied condition "success or failure" May 8 13:38:45.729: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022 container client-container: STEP: delete the pod May 8 13:38:45.765: INFO: Waiting for pod downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022 to disappear May 8 13:38:45.769: INFO: Pod downwardapi-volume-6d3973f0-a5aa-4768-b276-aa64a58f9022 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:38:45.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8820" for this suite. 
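The downward API volume item under test renders the container's limits.memory into a file the client-container then reads. A sketch; the 1Mi divisor and file path are assumptions for illustration:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryLimitFile projects the named container's memory limit into a
// volume file, expressed in units of the divisor.
func memoryLimitFile() corev1.DownwardAPIVolumeFile {
	divisor := resource.MustParse("1Mi")
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.memory",
			Divisor:       divisor,
		},
	}
}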
May 8 13:38:51.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:38:51.866: INFO: namespace downward-api-8820 deletion completed in 6.094153917s • [SLOW TEST:10.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:38:51.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 8 13:38:51.961: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715150,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 13:38:51.961: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715150,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 8 13:39:01.968: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715170,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 13:39:01.968: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715170,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 8 13:39:11.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715190,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 13:39:11.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715190,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 8 13:39:21.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715211,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 13:39:21.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-a,UID:873eecf3-1ac8-4006-8998-6b2c03726fcd,ResourceVersion:9715211,Generation:0,CreationTimestamp:2020-05-08 13:38:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 8 13:39:31.993: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-b,UID:86e05788-0ab1-4cc6-ac54-ee52d847e842,ResourceVersion:9715233,Generation:0,CreationTimestamp:2020-05-08 13:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 13:39:31.993: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-b,UID:86e05788-0ab1-4cc6-ac54-ee52d847e842,ResourceVersion:9715233,Generation:0,CreationTimestamp:2020-05-08 13:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 8 13:39:42.000: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-b,UID:86e05788-0ab1-4cc6-ac54-ee52d847e842,ResourceVersion:9715253,Generation:0,CreationTimestamp:2020-05-08 13:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 13:39:42.001: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9473,SelfLink:/api/v1/namespaces/watch-9473/configmaps/e2e-watch-test-configmap-b,UID:86e05788-0ab1-4cc6-ac54-ee52d847e842,ResourceVersion:9715253,Generation:0,CreationTimestamp:2020-05-08 13:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:39:52.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9473" for this suite. 
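Each labeled watch above is simply a ConfigMap watch with a LabelSelector, which is why watcher A and watcher A-or-B both log the same ADDED/MODIFIED/DELETED events. A sketch for watcher A, assuming a recent client-go Watch signature (older releases omit the context argument):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelA streams events for ConfigMaps carrying label A only; other
// ConfigMaps in the namespace never appear on the channel.
func watchLabelA(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}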
May 8 13:39:58.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:39:58.098: INFO: namespace watch-9473 deletion completed in 6.092057752s • [SLOW TEST:66.232 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:39:58.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 8 13:39:58.230: INFO: Waiting up to 5m0s for pod "client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f" in namespace "containers-1041" to be "success or failure" May 8 13:39:58.242: INFO: Pod "client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.977144ms May 8 13:40:00.245: INFO: Pod "client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015237874s May 8 13:40:02.249: INFO: Pod "client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019125689s STEP: Saw pod success May 8 13:40:02.249: INFO: Pod "client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f" satisfied condition "success or failure" May 8 13:40:02.252: INFO: Trying to get logs from node iruya-worker2 pod client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f container test-container: STEP: delete the pod May 8 13:40:02.275: INFO: Waiting for pod client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f to disappear May 8 13:40:02.279: INFO: Pod client-containers-62b7f712-3991-42fa-9d51-c1d31c7a6b3f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:40:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1041" for this suite. 
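Leaving Command and Args unset is the whole point of the next test: the container must fall back to the image's own ENTRYPOINT/CMD. A sketch; the image name is an assumption based on e2e conventions:

package sketch

import corev1 "k8s.io/api/core/v1"

// defaultsContainer leaves Command and Args nil, so the runtime executes
// the image's built-in entrypoint and default arguments.
func defaultsContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-container",
		Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
		// Command: nil, Args: nil — the image defaults apply.
	}
}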
May 8 13:40:08.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:40:08.418: INFO: namespace containers-1041 deletion completed in 6.135861749s • [SLOW TEST:10.319 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:40:08.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6002/secret-test-da7e846b-762f-49eb-98b6-12096e4f1387 STEP: Creating a pod to test consume secrets May 8 13:40:08.552: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb" in namespace "secrets-6002" to be "success or failure" May 8 13:40:08.555: INFO: Pod "pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.919186ms May 8 13:40:10.558: INFO: Pod "pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006381942s May 8 13:40:12.563: INFO: Pod "pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010589435s STEP: Saw pod success May 8 13:40:12.563: INFO: Pod "pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb" satisfied condition "success or failure" May 8 13:40:12.565: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb container env-test: STEP: delete the pod May 8 13:40:12.678: INFO: Waiting for pod pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb to disappear May 8 13:40:12.735: INFO: Pod pod-configmaps-4f632f2b-2b68-4bf1-af7d-a958de743fcb no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:40:12.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6002" for this suite. 
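"Consumable via the environment" refers to a secretKeyRef: one key of a Secret injected as an environment variable. A sketch with illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretEnvVar exposes a single Secret key to the container's environment,
// the mechanism the env-test container reads back.
func secretEnvVar(secretName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  "data-1",
			},
		},
	}
}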
May 8 13:40:18.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:40:18.872: INFO: namespace secrets-6002 deletion completed in 6.132746946s • [SLOW TEST:10.453 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:40:18.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-75bd6abc-8d2a-4e90-84d3-87e2bf54814f STEP: Creating secret with name s-test-opt-upd-e84e54c3-e9b0-46c3-b6bb-b7a5811ccd4f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-75bd6abc-8d2a-4e90-84d3-87e2bf54814f STEP: Updating secret s-test-opt-upd-e84e54c3-e9b0-46c3-b6bb-b7a5811ccd4f STEP: Creating secret with name s-test-opt-create-fd0447d5-88c3-4b3a-9867-da522a84eebc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:41:35.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1017" for this suite. 
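Optional: true on the projection is what lets the test above delete s-test-opt-del-* without failing the pod, and what makes s-test-opt-create-* show up in the volume once the secret exists. A sketch of one such source:

package sketch

import corev1 "k8s.io/api/core/v1"

// optionalSecretSource marks a projected secret as optional: the volume
// mounts even while the secret is missing, and the files appear or vanish
// as the secret is created or deleted.
func optionalSecretSource(secretName string) corev1.VolumeProjection {
	optional := true
	return corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
			Optional:             &optional,
		},
	}
}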
May 8 13:41:57.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:41:57.469: INFO: namespace projected-1017 deletion completed in 22.086180908s • [SLOW TEST:98.597 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:41:57.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 13:41:57.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8145' May 8 13:42:00.138: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 13:42:00.138: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 8 13:42:00.156: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 8 13:42:00.166: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 8 13:42:00.200: INFO: scanned /root for discovery docs: May 8 13:42:00.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8145' May 8 13:42:16.006: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 8 13:42:16.006: INFO: stdout: "Created e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72\nScaling up e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 8 13:42:16.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8145' May 8 13:42:16.110: INFO: stderr: "" May 8 13:42:16.110: INFO: stdout: "e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72-l4llb " May 8 13:42:16.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72-l4llb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8145' May 8 13:42:16.215: INFO: stderr: "" May 8 13:42:16.215: INFO: stdout: "true" May 8 13:42:16.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72-l4llb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8145' May 8 13:42:16.320: INFO: stderr: "" May 8 13:42:16.320: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 8 13:42:16.320: INFO: e2e-test-nginx-rc-58da52af2be4b2e1793b61ea10462d72-l4llb is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 8 13:42:16.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8145' May 8 13:42:16.435: INFO: stderr: "" May 8 13:42:16.435: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:42:16.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8145" for this suite. 
May 8 13:42:38.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:42:38.556: INFO: namespace kubectl-8145 deletion completed in 22.103453005s • [SLOW TEST:41.086 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:42:38.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3222 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 13:42:38.650: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 13:43:02.887: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3222 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 13:43:02.887: INFO: >>> kubeConfig: /root/.kube/config I0508 13:43:02.926952 6 log.go:172] (0xc001c4e4d0) (0xc00192fb80) Create stream I0508 13:43:02.926979 6 log.go:172] (0xc001c4e4d0) (0xc00192fb80) Stream added, broadcasting: 1 I0508 13:43:02.928896 6 log.go:172] (0xc001c4e4d0) Reply frame received for 1 I0508 13:43:02.928957 6 log.go:172] (0xc001c4e4d0) (0xc002908780) Create stream I0508 13:43:02.928982 6 log.go:172] (0xc001c4e4d0) (0xc002908780) Stream added, broadcasting: 3 I0508 13:43:02.930390 6 log.go:172] (0xc001c4e4d0) Reply frame received for 3 I0508 13:43:02.930438 6 log.go:172] (0xc001c4e4d0) (0xc00192fd60) Create stream I0508 13:43:02.930456 6 log.go:172] (0xc001c4e4d0) (0xc00192fd60) Stream added, broadcasting: 5 I0508 13:43:02.931463 6 log.go:172] (0xc001c4e4d0) Reply frame received for 5 I0508 13:43:04.005644 6 log.go:172] (0xc001c4e4d0) Data frame received for 3 I0508 13:43:04.005680 6 log.go:172] (0xc002908780) (3) Data frame handling I0508 13:43:04.005688 6 log.go:172] (0xc002908780) (3) Data frame sent I0508 13:43:04.005693 6 log.go:172] (0xc001c4e4d0) Data frame received for 3 I0508 13:43:04.005697 6 log.go:172] (0xc002908780) (3) Data frame handling I0508 13:43:04.005730 6 log.go:172] (0xc001c4e4d0) Data frame received for 5 I0508 13:43:04.005780 6 log.go:172] (0xc00192fd60) (5) Data frame handling I0508 13:43:04.007885 6 log.go:172] (0xc001c4e4d0) Data frame received for 1 I0508 13:43:04.007915 6 log.go:172] 
(0xc00192fb80) (1) Data frame handling I0508 13:43:04.007961 6 log.go:172] (0xc00192fb80) (1) Data frame sent I0508 13:43:04.007993 6 log.go:172] (0xc001c4e4d0) (0xc00192fb80) Stream removed, broadcasting: 1 I0508 13:43:04.008017 6 log.go:172] (0xc001c4e4d0) Go away received I0508 13:43:04.008191 6 log.go:172] (0xc001c4e4d0) (0xc00192fb80) Stream removed, broadcasting: 1 I0508 13:43:04.008229 6 log.go:172] (0xc001c4e4d0) (0xc002908780) Stream removed, broadcasting: 3 I0508 13:43:04.008247 6 log.go:172] (0xc001c4e4d0) (0xc00192fd60) Stream removed, broadcasting: 5 May 8 13:43:04.008: INFO: Found all expected endpoints: [netserver-0] May 8 13:43:04.012: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.29 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3222 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 13:43:04.012: INFO: >>> kubeConfig: /root/.kube/config I0508 13:43:04.045838 6 log.go:172] (0xc00101cdc0) (0xc002812d20) Create stream I0508 13:43:04.045881 6 log.go:172] (0xc00101cdc0) (0xc002812d20) Stream added, broadcasting: 1 I0508 13:43:04.049407 6 log.go:172] (0xc00101cdc0) Reply frame received for 1 I0508 13:43:04.049447 6 log.go:172] (0xc00101cdc0) (0xc00154d680) Create stream I0508 13:43:04.049462 6 log.go:172] (0xc00101cdc0) (0xc00154d680) Stream added, broadcasting: 3 I0508 13:43:04.050760 6 log.go:172] (0xc00101cdc0) Reply frame received for 3 I0508 13:43:04.050790 6 log.go:172] (0xc00101cdc0) (0xc002908820) Create stream I0508 13:43:04.050808 6 log.go:172] (0xc00101cdc0) (0xc002908820) Stream added, broadcasting: 5 I0508 13:43:04.051800 6 log.go:172] (0xc00101cdc0) Reply frame received for 5 I0508 13:43:05.142772 6 log.go:172] (0xc00101cdc0) Data frame received for 3 I0508 13:43:05.142821 6 log.go:172] (0xc00154d680) (3) Data frame handling I0508 13:43:05.142845 6 log.go:172] (0xc00154d680) (3) Data frame sent I0508 13:43:05.142862 6 log.go:172] (0xc00101cdc0) Data frame received for 3 I0508 13:43:05.142872 6 log.go:172] (0xc00154d680) (3) Data frame handling I0508 13:43:05.143314 6 log.go:172] (0xc00101cdc0) Data frame received for 5 I0508 13:43:05.143342 6 log.go:172] (0xc002908820) (5) Data frame handling I0508 13:43:05.144915 6 log.go:172] (0xc00101cdc0) Data frame received for 1 I0508 13:43:05.144932 6 log.go:172] (0xc002812d20) (1) Data frame handling I0508 13:43:05.144959 6 log.go:172] (0xc002812d20) (1) Data frame sent I0508 13:43:05.144978 6 log.go:172] (0xc00101cdc0) (0xc002812d20) Stream removed, broadcasting: 1 I0508 13:43:05.145062 6 log.go:172] (0xc00101cdc0) (0xc002812d20) Stream removed, broadcasting: 1 I0508 13:43:05.145080 6 log.go:172] (0xc00101cdc0) (0xc00154d680) Stream removed, broadcasting: 3 I0508 13:43:05.145476 6 log.go:172] (0xc00101cdc0) (0xc002908820) Stream removed, broadcasting: 5 May 8 13:43:05.145: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:43:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0508 13:43:05.145758 6 log.go:172] (0xc00101cdc0) Go away received STEP: Destroying namespace "pod-network-test-3222" for this suite. 
May 8 13:43:27.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:43:27.276: INFO: namespace pod-network-test-3222 deletion completed in 22.126229573s • [SLOW TEST:48.720 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:43:27.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:43:27.377: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 8 13:43:27.383: INFO: Number of nodes with available pods: 0 May 8 13:43:27.383: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 8 13:43:27.484: INFO: Number of nodes with available pods: 0 May 8 13:43:27.484: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:28.490: INFO: Number of nodes with available pods: 0 May 8 13:43:28.490: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:29.489: INFO: Number of nodes with available pods: 0 May 8 13:43:29.489: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:30.489: INFO: Number of nodes with available pods: 0 May 8 13:43:30.489: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:31.489: INFO: Number of nodes with available pods: 1 May 8 13:43:31.489: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 8 13:43:31.527: INFO: Number of nodes with available pods: 1 May 8 13:43:31.527: INFO: Number of running nodes: 0, number of available pods: 1 May 8 13:43:32.531: INFO: Number of nodes with available pods: 0 May 8 13:43:32.531: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 8 13:43:32.560: INFO: Number of nodes with available pods: 0 May 8 13:43:32.560: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:33.564: INFO: Number of nodes with available pods: 0 May 8 13:43:33.564: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:34.565: INFO: Number of nodes with available pods: 0 May 8 13:43:34.565: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:35.564: INFO: Number of nodes with available pods: 0 May 8 13:43:35.564: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:36.565: INFO: Number of nodes with available pods: 0 May 8 13:43:36.565: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:37.564: INFO: Number of nodes with available pods: 0 May 8 13:43:37.564: INFO: Node iruya-worker is running more than one daemon pod May 8 13:43:38.565: INFO: Number of nodes with available pods: 1 May 8 13:43:38.565: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3054, will wait for the garbage collector to delete the pods May 8 13:43:38.631: INFO: Deleting DaemonSet.extensions daemon-set took: 7.111663ms May 8 13:43:38.931: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266822ms May 8 13:43:52.236: INFO: Number of nodes with available pods: 0 May 8 13:43:52.236: INFO: Number of running nodes: 0, number of available pods: 0 May 8 13:43:52.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3054/daemonsets","resourceVersion":"9716042"},"items":null} May 8 13:43:52.240: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3054/pods","resourceVersion":"9716042"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:43:52.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3054" for this suite. 
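The schedule/unschedule counts above come from flipping a node label against a DaemonSet whose pod template carries a node selector. A minimal sketch of such an object follows; it only prints the manifest JSON, and the label key "color" and the nginx image are illustrative stand-ins (the log shows the blue/green values but not the key the fixture uses). The k8s.io/api and k8s.io/apimachinery modules are assumed to be on the module path:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // The test also switches the strategy to RollingUpdate mid-run.
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // Pods land only on nodes carrying this label; relabelling
                    // a node from blue to green evicts the pod, exactly as the
                    // running/available counts above show.
                    NodeSelector: map[string]string{"color": "blue"},
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}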
May 8 13:43:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:43:58.365: INFO: namespace daemonsets-3054 deletion completed in 6.083783026s • [SLOW TEST:31.088 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:43:58.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7456 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7456 STEP: Deleting pre-stop pod May 8 13:44:11.542: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:44:11.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7456" for this suite. 
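The JSON dump above is the server pod's self-report: the tester pod's preStop hook phoned home exactly once before dying ("prestop": 1). The shape of that server can be sketched in a few lines of stdlib Go; the /prestop and /read paths and the port are assumptions mirroring the report's structure, not the literal e2e fixture:

package main

import (
    "encoding/json"
    "net/http"
    "sync/atomic"
)

var prestopHits int64

func main() {
    // The tester pod's preStop hook issues a GET against /prestop on this
    // server; each hit bumps the counter that shows up in the JSON above.
    http.HandleFunc("/prestop", func(w http.ResponseWriter, r *http.Request) {
        atomic.AddInt64(&prestopHits, 1)
    })
    // The harness polls /read and asserts the hook fired exactly once.
    http.HandleFunc("/read", func(w http.ResponseWriter, r *http.Request) {
        json.NewEncoder(w).Encode(map[string]interface{}{
            "Received": map[string]int64{"prestop": atomic.LoadInt64(&prestopHits)},
        })
    })
    http.ListenAndServe(":8080", nil)
}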
May 8 13:44:49.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:44:49.673: INFO: namespace prestop-7456 deletion completed in 38.116884475s • [SLOW TEST:51.307 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:44:49.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-5324 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5324 to expose endpoints map[] May 8 13:44:49.814: INFO: Get endpoints failed (31.596345ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 8 13:44:50.818: INFO: successfully validated that service multi-endpoint-test in namespace services-5324 exposes endpoints map[] (1.035932407s elapsed) STEP: Creating pod pod1 in namespace services-5324 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5324 to expose endpoints map[pod1:[100]] May 8 13:44:53.941: INFO: successfully validated that service multi-endpoint-test in namespace services-5324 exposes endpoints map[pod1:[100]] (3.115598656s elapsed) STEP: Creating pod pod2 in namespace services-5324 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5324 to expose endpoints map[pod1:[100] pod2:[101]] May 8 13:44:58.003: INFO: successfully validated that service multi-endpoint-test in namespace services-5324 exposes endpoints map[pod1:[100] pod2:[101]] (4.057366904s elapsed) STEP: Deleting pod pod1 in namespace services-5324 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5324 to expose endpoints map[pod2:[101]] May 8 13:44:59.092: INFO: successfully validated that service multi-endpoint-test in namespace services-5324 exposes endpoints map[pod2:[101]] (1.083640742s elapsed) STEP: Deleting pod pod2 in namespace services-5324 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5324 to expose endpoints map[] May 8 13:45:00.140: INFO: successfully validated that service multi-endpoint-test in namespace services-5324 exposes endpoints map[] (1.043730828s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:45:00.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5324" for this suite. 
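The endpoint maps in the log (map[pod1:[100] pod2:[101]]) pair pod names with the container ports a multiport Service resolves to. A sketch of such a Service follows; the selector and the service-side port numbers are illustrative guesses consistent with those maps, and the program only prints the manifest:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "multi-endpoint-test"},
            Ports: []corev1.ServicePort{
                // The endpoint maps above (pod1:[100], pod2:[101]) are the
                // container ports these two service ports resolve to as each
                // backing pod is created and deleted.
                {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }
    out, _ := json.MarshalIndent(svc, "", "  ")
    fmt.Println(string(out))
}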
May 8 13:45:22.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:45:22.521: INFO: namespace services-5324 deletion completed in 22.122832832s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.847 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:45:22.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:45:22.608: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 8 13:45:27.612: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 13:45:27.612: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 8 13:45:27.673: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4868,SelfLink:/apis/apps/v1/namespaces/deployment-4868/deployments/test-cleanup-deployment,UID:4f99595c-74a7-496a-9d35-65d04c175aae,ResourceVersion:9716365,Generation:1,CreationTimestamp:2020-05-08 13:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 8 13:45:27.705: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4868,SelfLink:/apis/apps/v1/namespaces/deployment-4868/replicasets/test-cleanup-deployment-55bbcbc84c,UID:700fc515-7241-4f1c-be22-5099a88110c8,ResourceVersion:9716367,Generation:1,CreationTimestamp:2020-05-08 13:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4f99595c-74a7-496a-9d35-65d04c175aae 0xc002da3327 0xc002da3328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 13:45:27.705: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 8 13:45:27.705: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4868,SelfLink:/apis/apps/v1/namespaces/deployment-4868/replicasets/test-cleanup-controller,UID:d6b77832-96e8-43ca-85cf-1fc25da22a49,ResourceVersion:9716366,Generation:1,CreationTimestamp:2020-05-08 13:45:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4f99595c-74a7-496a-9d35-65d04c175aae 0xc002da3047 0xc002da3048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 8 13:45:27.801: INFO: Pod "test-cleanup-controller-b96vd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-b96vd,GenerateName:test-cleanup-controller-,Namespace:deployment-4868,SelfLink:/api/v1/namespaces/deployment-4868/pods/test-cleanup-controller-b96vd,UID:44a0c98b-1340-43f6-b90e-762a04fa05f5,ResourceVersion:9716359,Generation:0,CreationTimestamp:2020-05-08 13:45:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller d6b77832-96e8-43ca-85cf-1fc25da22a49 0xc0031aa3c7 0xc0031aa3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rfp6r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rfp6r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rfp6r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031aa440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031aa460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:45:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:45:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:45:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:45:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.134,StartTime:2020-05-08 13:45:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:45:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3dc51cf389b49770a57f5900d128e83941bcb4c2429f35e9cb7532925db336ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:45:27.801: INFO: Pod "test-cleanup-deployment-55bbcbc84c-d575f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-d575f,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4868,SelfLink:/api/v1/namespaces/deployment-4868/pods/test-cleanup-deployment-55bbcbc84c-d575f,UID:f024b8cf-f88a-4c85-adf6-96b3a065b0ce,ResourceVersion:9716373,Generation:0,CreationTimestamp:2020-05-08 13:45:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 700fc515-7241-4f1c-be22-5099a88110c8 0xc0031aa547 0xc0031aa548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rfp6r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rfp6r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rfp6r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031aa5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031aa5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:45:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:45:27.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4868" for this suite. 
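The deciding field in the Deployment dump above is RevisionHistoryLimit:*0: with a zero history limit the controller garbage-collects every superseded ReplicaSet, which is exactly what "Waiting for deployment test-cleanup-deployment history to be cleaned up" waits for. A minimal reconstruction of that object, with the name, image, labels, replica count, and limit taken from the dump itself:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "cleanup-pod"}
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: labels},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            // RevisionHistoryLimit=0 is the knob under test: after the rollout
            // the controller deletes every old ReplicaSet instead of keeping
            // it around for rollbacks.
            RevisionHistoryLimit: int32Ptr(0),
            Selector:             &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}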
May 8 13:45:33.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:45:33.920: INFO: namespace deployment-4868 deletion completed in 6.110816804s • [SLOW TEST:11.399 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:45:33.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:46:02.038: INFO: Container started at 2020-05-08 13:45:36 +0000 UTC, pod became ready at 2020-05-08 13:46:00 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:46:02.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8456" for this suite. 
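The single INFO line above carries the whole assertion: the container started at 13:45:36 and the pod only turned Ready at 13:46:00, roughly 24 seconds later, so readiness was not reported before the probe's initial delay elapsed. The arithmetic, as a tiny self-check (the 20-second delay is an assumed probe setting; the log does not print the fixture's value):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps taken from the run above.
    started, _ := time.Parse(time.RFC3339, "2020-05-08T13:45:36Z")
    ready, _ := time.Parse(time.RFC3339, "2020-05-08T13:46:00Z")
    const initialDelay = 20 * time.Second // assumed readiness-probe setting

    // Mirrors the test's check: the kubelet must not mark the pod Ready
    // before initialDelaySeconds has elapsed since container start.
    if gap := ready.Sub(started); gap < initialDelay {
        fmt.Printf("FAIL: ready after %v, before the %v initial delay\n", gap, initialDelay)
    } else {
        fmt.Printf("OK: pod became ready %v after container start\n", gap)
    }
}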
May 8 13:46:24.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:46:24.108: INFO: namespace container-probe-8456 deletion completed in 22.065586493s • [SLOW TEST:50.187 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:46:24.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 8 13:46:24.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4732 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 8 13:46:29.871: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0508 13:46:29.788524 2809 log.go:172] (0xc0009d80b0) (0xc00071c140) Create stream\nI0508 13:46:29.788549 2809 log.go:172] (0xc0009d80b0) (0xc00071c140) Stream added, broadcasting: 1\nI0508 13:46:29.791937 2809 log.go:172] (0xc0009d80b0) Reply frame received for 1\nI0508 13:46:29.791975 2809 log.go:172] (0xc0009d80b0) (0xc000501540) Create stream\nI0508 13:46:29.791994 2809 log.go:172] (0xc0009d80b0) (0xc000501540) Stream added, broadcasting: 3\nI0508 13:46:29.792689 2809 log.go:172] (0xc0009d80b0) Reply frame received for 3\nI0508 13:46:29.792717 2809 log.go:172] (0xc0009d80b0) (0xc0003041e0) Create stream\nI0508 13:46:29.792727 2809 log.go:172] (0xc0009d80b0) (0xc0003041e0) Stream added, broadcasting: 5\nI0508 13:46:29.793825 2809 log.go:172] (0xc0009d80b0) Reply frame received for 5\nI0508 13:46:29.793874 2809 log.go:172] (0xc0009d80b0) (0xc00071c000) Create stream\nI0508 13:46:29.793887 2809 log.go:172] (0xc0009d80b0) (0xc00071c000) Stream added, broadcasting: 7\nI0508 13:46:29.794676 2809 log.go:172] (0xc0009d80b0) Reply frame received for 7\nI0508 13:46:29.794851 2809 log.go:172] (0xc000501540) (3) Writing data frame\nI0508 13:46:29.794934 2809 log.go:172] (0xc000501540) (3) Writing data frame\nI0508 13:46:29.795670 2809 log.go:172] (0xc0009d80b0) Data frame received for 5\nI0508 13:46:29.795693 2809 log.go:172] (0xc0003041e0) (5) Data frame handling\nI0508 13:46:29.795700 2809 log.go:172] (0xc0003041e0) (5) Data frame sent\nI0508 13:46:29.796472 2809 log.go:172] (0xc0009d80b0) Data frame received for 5\nI0508 13:46:29.796503 2809 log.go:172] (0xc0003041e0) (5) Data frame handling\nI0508 13:46:29.796524 2809 log.go:172] (0xc0003041e0) (5) Data frame sent\nI0508 13:46:29.847946 2809 log.go:172] (0xc0009d80b0) Data frame received for 5\nI0508 13:46:29.847991 2809 log.go:172] (0xc0009d80b0) Data frame received for 7\nI0508 13:46:29.848021 2809 log.go:172] (0xc00071c000) (7) Data frame handling\nI0508 13:46:29.848045 2809 log.go:172] (0xc0003041e0) (5) Data frame handling\nI0508 13:46:29.848451 2809 log.go:172] (0xc0009d80b0) Data frame received for 1\nI0508 13:46:29.848470 2809 log.go:172] (0xc00071c140) (1) Data frame handling\nI0508 13:46:29.848490 2809 log.go:172] (0xc00071c140) (1) Data frame sent\nI0508 13:46:29.848505 2809 log.go:172] (0xc0009d80b0) (0xc00071c140) Stream removed, broadcasting: 1\nI0508 13:46:29.848579 2809 log.go:172] (0xc0009d80b0) (0xc00071c140) Stream removed, broadcasting: 1\nI0508 13:46:29.848599 2809 log.go:172] (0xc0009d80b0) (0xc000501540) Stream removed, broadcasting: 3\nI0508 13:46:29.848621 2809 log.go:172] (0xc0009d80b0) (0xc0003041e0) Stream removed, broadcasting: 5\nI0508 13:46:29.848775 2809 log.go:172] (0xc0009d80b0) (0xc00071c000) Stream removed, broadcasting: 7\nI0508 13:46:29.848944 2809 log.go:172] (0xc0009d80b0) Go away received\n" May 8 13:46:29.871: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:46:31.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4732" for this suite. 
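The command the harness ran above can be replayed verbatim; a stdlib Go wrapper follows, feeding the same "abcd1234" payload the test pipes through stdin (the stdout capture shows it echoed back before "stdin closed"). Note that --generator=job/v1 was already deprecated when this log was produced, as kubectl's own stderr says, and has since been removed, so this sketch is tied to the v1.15-era tooling:

package main

import (
    "bytes"
    "fmt"
    "os/exec"
)

func main() {
    // Replays the logged invocation: create a Job from an image, attach to
    // it, and let --rm delete the Job once the attached session ends.
    cmd := exec.Command("kubectl",
        "--namespace=default",
        "run", "e2e-test-rm-busybox-job",
        "--image=docker.io/library/busybox:1.29",
        "--rm=true", "--generator=job/v1", "--restart=OnFailure",
        "--attach=true", "--stdin",
        "--", "sh", "-c", "cat && echo 'stdin closed'")
    cmd.Stdin = bytes.NewBufferString("abcd1234") // what the test pipes in
    out, err := cmd.CombinedOutput()
    fmt.Printf("err=%v\noutput:\n%s", err, out)
}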
May 8 13:46:43.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:46:43.970: INFO: namespace kubectl-4732 deletion completed in 12.087732952s • [SLOW TEST:19.862 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:46:43.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 13:46:52.079: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:46:52.099: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:46:54.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:46:54.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:46:56.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:46:56.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:46:58.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:46:58.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:00.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:00.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:02.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:02.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:04.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:04.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:06.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:06.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:08.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:08.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:10.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:10.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:12.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:12.104: INFO: Pod pod-with-poststart-exec-hook 
still exists May 8 13:47:14.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:14.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:16.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:16.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:18.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:18.103: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:20.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:20.104: INFO: Pod pod-with-poststart-exec-hook still exists May 8 13:47:22.100: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 13:47:22.110: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:47:22.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7175" for this suite. May 8 13:47:44.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:47:44.232: INFO: namespace container-lifecycle-hook-7175 deletion completed in 22.119488142s • [SLOW TEST:60.262 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:47:44.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 8 13:47:48.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-c54949be-4c75-4a0f-83ad-6c82c247e2c2 -c busybox-main-container --namespace=emptydir-6551 -- cat /usr/share/volumeshare/shareddata.txt' May 8 13:47:48.558: INFO: stderr: "I0508 13:47:48.473592 2835 log.go:172] (0xc0009b2420) (0xc0008f6820) Create stream\nI0508 13:47:48.473641 2835 log.go:172] (0xc0009b2420) (0xc0008f6820) Stream added, broadcasting: 1\nI0508 13:47:48.476080 2835 log.go:172] (0xc0009b2420) Reply frame received for 1\nI0508 13:47:48.476133 2835 log.go:172] (0xc0009b2420) (0xc0006d41e0) Create stream\nI0508 13:47:48.476150 2835 log.go:172] (0xc0009b2420) (0xc0006d41e0) Stream added, broadcasting: 3\nI0508 13:47:48.477693 2835 log.go:172] (0xc0009b2420) Reply frame received for 3\nI0508 
13:47:48.477730 2835 log.go:172] (0xc0009b2420) (0xc0008f68c0) Create stream\nI0508 13:47:48.477742 2835 log.go:172] (0xc0009b2420) (0xc0008f68c0) Stream added, broadcasting: 5\nI0508 13:47:48.478764 2835 log.go:172] (0xc0009b2420) Reply frame received for 5\nI0508 13:47:48.552705 2835 log.go:172] (0xc0009b2420) Data frame received for 5\nI0508 13:47:48.552776 2835 log.go:172] (0xc0008f68c0) (5) Data frame handling\nI0508 13:47:48.552805 2835 log.go:172] (0xc0009b2420) Data frame received for 3\nI0508 13:47:48.552820 2835 log.go:172] (0xc0006d41e0) (3) Data frame handling\nI0508 13:47:48.552842 2835 log.go:172] (0xc0006d41e0) (3) Data frame sent\nI0508 13:47:48.552857 2835 log.go:172] (0xc0009b2420) Data frame received for 3\nI0508 13:47:48.552868 2835 log.go:172] (0xc0006d41e0) (3) Data frame handling\nI0508 13:47:48.554321 2835 log.go:172] (0xc0009b2420) Data frame received for 1\nI0508 13:47:48.554354 2835 log.go:172] (0xc0008f6820) (1) Data frame handling\nI0508 13:47:48.554376 2835 log.go:172] (0xc0008f6820) (1) Data frame sent\nI0508 13:47:48.554392 2835 log.go:172] (0xc0009b2420) (0xc0008f6820) Stream removed, broadcasting: 1\nI0508 13:47:48.554406 2835 log.go:172] (0xc0009b2420) Go away received\nI0508 13:47:48.554836 2835 log.go:172] (0xc0009b2420) (0xc0008f6820) Stream removed, broadcasting: 1\nI0508 13:47:48.554853 2835 log.go:172] (0xc0009b2420) (0xc0006d41e0) Stream removed, broadcasting: 3\nI0508 13:47:48.554862 2835 log.go:172] (0xc0009b2420) (0xc0008f68c0) Stream removed, broadcasting: 5\n" May 8 13:47:48.558: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:47:48.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6551" for this suite. 
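The "Hello from the busy-box sub-container" line read back above demonstrates two containers sharing one emptyDir mount: a sub-container writes the sentinel file and the main container is the kubectl exec target that cats it. A reconstruction of such a pod follows; the container commands and the busybox image are assumptions that reproduce the observed behavior rather than the literal fixture:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    const mount = "/usr/share/volumeshare"
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
        Spec: corev1.PodSpec{
            // One emptyDir, mounted by both containers: writes from one side
            // are immediately visible to the other.
            Volumes: []corev1.Volume{{
                Name:         "shared-data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{
                {
                    // Writer: drops the sentinel file, then stays alive.
                    Name:  "busybox-sub-container",
                    Image: "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "echo 'Hello from the busy-box sub-container' > " + mount + "/shareddata.txt && sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: mount}},
                },
                {
                    // Reader: the `kubectl exec ... cat` in the log targets this one.
                    Name:         "busybox-main-container",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: mount}},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}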
May 8 13:47:54.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:47:54.656: INFO: namespace emptydir-6551 deletion completed in 6.093541229s • [SLOW TEST:10.423 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:47:54.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-00fbf563-f410-4113-a7a8-e371d44135b3 STEP: Creating a pod to test consume configMaps May 8 13:47:54.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad" in namespace "configmap-2903" to be "success or failure" May 8 13:47:54.723: INFO: Pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.563511ms May 8 13:47:56.727: INFO: Pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00788229s May 8 13:47:58.732: INFO: Pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.012290359s May 8 13:48:00.736: INFO: Pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015984001s STEP: Saw pod success May 8 13:48:00.736: INFO: Pod "pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad" satisfied condition "success or failure" May 8 13:48:00.739: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad container configmap-volume-test: STEP: delete the pod May 8 13:48:00.780: INFO: Waiting for pod pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad to disappear May 8 13:48:00.790: INFO: Pod pod-configmaps-cbe1c845-6ea6-41a3-8ef0-4ac4567ce4ad no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:48:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2903" for this suite. 
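"mappings and Item mode set" refers to the items list of a ConfigMap volume source: each listed key is projected to an explicit path with an explicit per-file mode instead of the volume-wide default. A sketch of such a volume source; the key, path, and 0400 mode are illustrative values, since the log does not print the fixture's:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func modePtr(m int32) *int32 { return &m }

func main() {
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map",
                },
                // Each item remaps one key to a new relative path and gives
                // the resulting file an explicit mode.
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",
                    Path: "path/to/data-2",
                    Mode: modePtr(0400),
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}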
May 8 13:48:06.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:48:06.887: INFO: namespace configmap-2903 deletion completed in 6.094555094s • [SLOW TEST:12.231 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:48:06.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 8 13:48:06.996: INFO: Waiting up to 5m0s for pod "pod-d1aa536d-427b-4b52-95ef-8c3458b301b0" in namespace "emptydir-9106" to be "success or failure" May 8 13:48:07.031: INFO: Pod "pod-d1aa536d-427b-4b52-95ef-8c3458b301b0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.795881ms May 8 13:48:09.066: INFO: Pod "pod-d1aa536d-427b-4b52-95ef-8c3458b301b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070384241s May 8 13:48:11.070: INFO: Pod "pod-d1aa536d-427b-4b52-95ef-8c3458b301b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074590658s STEP: Saw pod success May 8 13:48:11.071: INFO: Pod "pod-d1aa536d-427b-4b52-95ef-8c3458b301b0" satisfied condition "success or failure" May 8 13:48:11.074: INFO: Trying to get logs from node iruya-worker2 pod pod-d1aa536d-427b-4b52-95ef-8c3458b301b0 container test-container: STEP: delete the pod May 8 13:48:11.089: INFO: Waiting for pod pod-d1aa536d-427b-4b52-95ef-8c3458b301b0 to disappear May 8 13:48:11.093: INFO: Pod pod-d1aa536d-427b-4b52-95ef-8c3458b301b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:48:11.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9106" for this suite. 
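The tuple in the test name decodes as: run the container as a non-root UID, expect the emptyDir to be created with mode 0777, and use the default node-disk medium (as opposed to Medium: "Memory"). A pod sketch along those lines; the UID, image, mount path, and shell commands are illustrative, since the conformance fixture uses its own test image to report the observed mode:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
        Spec: corev1.PodSpec{
            // "non-root": the container runs under an unprivileged UID, yet
            // must still be able to use the 0777 emptyDir mount.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // "default" medium = backed by node disk, not tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "ls -ld /mnt/volume && stat -c %a /mnt/volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "test-volume", MountPath: "/mnt/volume",
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}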
May 8 13:48:17.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:48:17.260: INFO: namespace emptydir-9106 deletion completed in 6.163374382s • [SLOW TEST:10.373 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:48:17.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2eb51522-4400-4fcb-a101-739e2da332c5 STEP: Creating a pod to test consume configMaps May 8 13:48:17.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576" in namespace "configmap-6005" to be "success or failure" May 8 13:48:17.402: INFO: Pod "pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576": Phase="Pending", Reason="", readiness=false. Elapsed: 59.650661ms May 8 13:48:19.471: INFO: Pod "pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129317426s May 8 13:48:21.475: INFO: Pod "pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133584154s STEP: Saw pod success May 8 13:48:21.476: INFO: Pod "pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576" satisfied condition "success or failure" May 8 13:48:21.479: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576 container configmap-volume-test: STEP: delete the pod May 8 13:48:21.532: INFO: Waiting for pod pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576 to disappear May 8 13:48:21.549: INFO: Pod pod-configmaps-5f7b7030-d64a-4b6e-9ca6-3024b8ce3576 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:48:21.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6005" for this suite. 
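For the plain (unmapped) variant above, the whole ConfigMap is mounted and every key becomes a file whose content the test container prints for the harness to compare against the stored value. The object itself is just data; a minimal example (key and value are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
        // With no items list on the volume source, each key here becomes a
        // file of the same name under the mount path.
        Data: map[string]string{"data-1": "value-1"},
    }
    out, _ := json.MarshalIndent(cm, "", "  ")
    fmt.Println(string(out))
}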
May 8 13:48:27.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:48:27.644: INFO: namespace configmap-6005 deletion completed in 6.090550535s • [SLOW TEST:10.382 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:48:27.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 13:48:33.752: INFO: DNS probes using dns-test-8c4143a5-aec0-40e5-9360-555b3c2ba9d6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 13:48:39.898: INFO: File wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:39.901: INFO: File jessie_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:39.901: INFO: Lookups using dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 failed for: [wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local jessie_udp@dns-test-service-3.dns-159.svc.cluster.local] May 8 13:48:44.906: INFO: File wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 8 13:48:44.910: INFO: File jessie_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:44.910: INFO: Lookups using dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 failed for: [wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local jessie_udp@dns-test-service-3.dns-159.svc.cluster.local] May 8 13:48:49.907: INFO: File wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:49.911: INFO: File jessie_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:49.911: INFO: Lookups using dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 failed for: [wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local jessie_udp@dns-test-service-3.dns-159.svc.cluster.local] May 8 13:48:54.905: INFO: File wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:54.907: INFO: File jessie_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:54.907: INFO: Lookups using dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 failed for: [wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local jessie_udp@dns-test-service-3.dns-159.svc.cluster.local] May 8 13:48:59.907: INFO: File wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:59.911: INFO: File jessie_udp@dns-test-service-3.dns-159.svc.cluster.local from pod dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 13:48:59.911: INFO: Lookups using dns-159/dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 failed for: [wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local jessie_udp@dns-test-service-3.dns-159.svc.cluster.local] May 8 13:49:04.908: INFO: DNS probes using dns-test-c30f6d59-b07e-4438-9fde-07e8e3813c89 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-159.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-159.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 13:49:11.798: INFO: DNS probes using dns-test-41b77a34-5fdf-43df-a5f1-93851d0b5bf8 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:49:11.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-159" for this suite. 
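The probe loop above watches a CNAME move: the service starts as type ExternalName pointing at foo.example.com, is patched to bar.example.com (the stale foo answers are the propagation window the failed lookups record), and finally becomes a ClusterIP service answering with an A record. A sketch of the initial object plus the same lookup the dig pods perform; note that net.LookupCNAME on the cluster-local name only succeeds when run inside the cluster while the test namespace exists:

package main

import (
    "fmt"
    "net"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Initial shape of the service under test: no selector, no endpoints,
    // just a DNS alias served by the cluster DNS as a CNAME.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
        Spec: corev1.ServiceSpec{
            Type:         corev1.ServiceTypeExternalName,
            ExternalName: "foo.example.com",
        },
    }
    fmt.Printf("%s -> CNAME %s\n", svc.Name, svc.Spec.ExternalName)

    // In-cluster, this is the check the probe pods run in their dig loops.
    if cname, err := net.LookupCNAME("dns-test-service-3.dns-159.svc.cluster.local"); err == nil {
        fmt.Println("resolved CNAME:", cname)
    }
}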
May 8 13:49:17.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:49:18.044: INFO: namespace dns-159 deletion completed in 6.131307875s • [SLOW TEST:50.400 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:49:18.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1585 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 8 13:49:18.153: INFO: Found 0 stateful pods, waiting for 3 May 8 13:49:28.163: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:49:28.163: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:49:28.163: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 8 13:49:38.158: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 8 13:49:38.158: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 8 13:49:38.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 8 13:49:38.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1585 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:49:38.443: INFO: stderr: "I0508 13:49:38.299400 2856 log.go:172] (0xc000830420) (0xc0003de6e0) Create stream\nI0508 13:49:38.299451 2856 log.go:172] (0xc000830420) (0xc0003de6e0) Stream added, broadcasting: 1\nI0508 13:49:38.302085 2856 log.go:172] (0xc000830420) Reply frame received for 1\nI0508 13:49:38.302136 2856 log.go:172] (0xc000830420) (0xc000856000) Create stream\nI0508 13:49:38.302154 2856 log.go:172] (0xc000830420) (0xc000856000) Stream added, broadcasting: 3\nI0508 13:49:38.303200 2856 log.go:172] (0xc000830420) Reply frame received for 3\nI0508 13:49:38.303237 2856 log.go:172] (0xc000830420) (0xc000964000) Create stream\nI0508 13:49:38.303251 2856 log.go:172] (0xc000830420) (0xc000964000) Stream added, broadcasting: 5\nI0508 13:49:38.304442 2856 log.go:172] (0xc000830420) Reply frame received for 5\nI0508 13:49:38.385853 
2856 log.go:172] (0xc000830420) Data frame received for 5\nI0508 13:49:38.385891 2856 log.go:172] (0xc000964000) (5) Data frame handling\nI0508 13:49:38.385912 2856 log.go:172] (0xc000964000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:49:38.435612 2856 log.go:172] (0xc000830420) Data frame received for 3\nI0508 13:49:38.435638 2856 log.go:172] (0xc000856000) (3) Data frame handling\nI0508 13:49:38.435661 2856 log.go:172] (0xc000856000) (3) Data frame sent\nI0508 13:49:38.435831 2856 log.go:172] (0xc000830420) Data frame received for 5\nI0508 13:49:38.435843 2856 log.go:172] (0xc000964000) (5) Data frame handling\nI0508 13:49:38.435876 2856 log.go:172] (0xc000830420) Data frame received for 3\nI0508 13:49:38.435893 2856 log.go:172] (0xc000856000) (3) Data frame handling\nI0508 13:49:38.437739 2856 log.go:172] (0xc000830420) Data frame received for 1\nI0508 13:49:38.437751 2856 log.go:172] (0xc0003de6e0) (1) Data frame handling\nI0508 13:49:38.437758 2856 log.go:172] (0xc0003de6e0) (1) Data frame sent\nI0508 13:49:38.437766 2856 log.go:172] (0xc000830420) (0xc0003de6e0) Stream removed, broadcasting: 1\nI0508 13:49:38.437773 2856 log.go:172] (0xc000830420) Go away received\nI0508 13:49:38.438211 2856 log.go:172] (0xc000830420) (0xc0003de6e0) Stream removed, broadcasting: 1\nI0508 13:49:38.438236 2856 log.go:172] (0xc000830420) (0xc000856000) Stream removed, broadcasting: 3\nI0508 13:49:38.438249 2856 log.go:172] (0xc000830420) (0xc000964000) Stream removed, broadcasting: 5\n" May 8 13:49:38.443: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:49:38.443: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 8 13:49:48.476: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 8 13:49:58.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1585 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:49:58.742: INFO: stderr: "I0508 13:49:58.636481 2882 log.go:172] (0xc0007aa370) (0xc000206820) Create stream\nI0508 13:49:58.636560 2882 log.go:172] (0xc0007aa370) (0xc000206820) Stream added, broadcasting: 1\nI0508 13:49:58.639581 2882 log.go:172] (0xc0007aa370) Reply frame received for 1\nI0508 13:49:58.639636 2882 log.go:172] (0xc0007aa370) (0xc0005a23c0) Create stream\nI0508 13:49:58.639653 2882 log.go:172] (0xc0007aa370) (0xc0005a23c0) Stream added, broadcasting: 3\nI0508 13:49:58.640754 2882 log.go:172] (0xc0007aa370) Reply frame received for 3\nI0508 13:49:58.640794 2882 log.go:172] (0xc0007aa370) (0xc0002068c0) Create stream\nI0508 13:49:58.640806 2882 log.go:172] (0xc0007aa370) (0xc0002068c0) Stream added, broadcasting: 5\nI0508 13:49:58.642083 2882 log.go:172] (0xc0007aa370) Reply frame received for 5\nI0508 13:49:58.733762 2882 log.go:172] (0xc0007aa370) Data frame received for 3\nI0508 13:49:58.733811 2882 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0508 13:49:58.733847 2882 log.go:172] (0xc0005a23c0) (3) Data frame sent\nI0508 13:49:58.733870 2882 log.go:172] (0xc0007aa370) Data frame received for 3\nI0508 13:49:58.733888 2882 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0508 13:49:58.734169 2882 log.go:172] (0xc0007aa370) Data frame received for 5\nI0508 
13:49:58.734199 2882 log.go:172] (0xc0002068c0) (5) Data frame handling\nI0508 13:49:58.734230 2882 log.go:172] (0xc0002068c0) (5) Data frame sent\nI0508 13:49:58.734247 2882 log.go:172] (0xc0007aa370) Data frame received for 5\nI0508 13:49:58.734277 2882 log.go:172] (0xc0002068c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:49:58.736043 2882 log.go:172] (0xc0007aa370) Data frame received for 1\nI0508 13:49:58.736076 2882 log.go:172] (0xc000206820) (1) Data frame handling\nI0508 13:49:58.736104 2882 log.go:172] (0xc000206820) (1) Data frame sent\nI0508 13:49:58.736136 2882 log.go:172] (0xc0007aa370) (0xc000206820) Stream removed, broadcasting: 1\nI0508 13:49:58.736164 2882 log.go:172] (0xc0007aa370) Go away received\nI0508 13:49:58.737004 2882 log.go:172] (0xc0007aa370) (0xc000206820) Stream removed, broadcasting: 1\nI0508 13:49:58.737038 2882 log.go:172] (0xc0007aa370) (0xc0005a23c0) Stream removed, broadcasting: 3\nI0508 13:49:58.737074 2882 log.go:172] (0xc0007aa370) (0xc0002068c0) Stream removed, broadcasting: 5\n" May 8 13:49:58.742: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:49:58.742: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' STEP: Rolling back to a previous revision May 8 13:50:18.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1585 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 8 13:50:19.028: INFO: stderr: "I0508 13:50:18.899930 2904 log.go:172] (0xc0009c2580) (0xc000750820) Create stream\nI0508 13:50:18.899986 2904 log.go:172] (0xc0009c2580) (0xc000750820) Stream added, broadcasting: 1\nI0508 13:50:18.903513 2904 log.go:172] (0xc0009c2580) Reply frame received for 1\nI0508 13:50:18.903579 2904 log.go:172] (0xc0009c2580) (0xc000750000) Create stream\nI0508 13:50:18.903606 2904 log.go:172] (0xc0009c2580) (0xc000750000) Stream added, broadcasting: 3\nI0508 13:50:18.904511 2904 log.go:172] (0xc0009c2580) Reply frame received for 3\nI0508 13:50:18.904545 2904 log.go:172] (0xc0009c2580) (0xc000750140) Create stream\nI0508 13:50:18.904567 2904 log.go:172] (0xc0009c2580) (0xc000750140) Stream added, broadcasting: 5\nI0508 13:50:18.905619 2904 log.go:172] (0xc0009c2580) Reply frame received for 5\nI0508 13:50:18.983441 2904 log.go:172] (0xc0009c2580) Data frame received for 5\nI0508 13:50:18.983484 2904 log.go:172] (0xc000750140) (5) Data frame handling\nI0508 13:50:18.983521 2904 log.go:172] (0xc000750140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0508 13:50:19.020017 2904 log.go:172] (0xc0009c2580) Data frame received for 3\nI0508 13:50:19.020061 2904 log.go:172] (0xc000750000) (3) Data frame handling\nI0508 13:50:19.020099 2904 log.go:172] (0xc000750000) (3) Data frame sent\nI0508 13:50:19.020123 2904 log.go:172] (0xc0009c2580) Data frame received for 3\nI0508 13:50:19.020143 2904 log.go:172] (0xc000750000) (3) Data frame handling\nI0508 13:50:19.020167 2904 log.go:172] (0xc0009c2580) Data frame received for 5\nI0508 13:50:19.020193 2904 log.go:172] (0xc000750140) (5) Data frame handling\nI0508 13:50:19.022467 2904 log.go:172] (0xc0009c2580) Data frame received for 1\nI0508 13:50:19.022504 2904 log.go:172] (0xc000750820) (1) Data frame handling\nI0508 13:50:19.022570 2904 log.go:172] (0xc000750820) (1) Data frame sent\nI0508 13:50:19.022615 2904 log.go:172] (0xc0009c2580) (0xc000750820) Stream 
removed, broadcasting: 1\nI0508 13:50:19.022648 2904 log.go:172] (0xc0009c2580) Go away received\nI0508 13:50:19.023018 2904 log.go:172] (0xc0009c2580) (0xc000750820) Stream removed, broadcasting: 1\nI0508 13:50:19.023044 2904 log.go:172] (0xc0009c2580) (0xc000750000) Stream removed, broadcasting: 3\nI0508 13:50:19.023057 2904 log.go:172] (0xc0009c2580) (0xc000750140) Stream removed, broadcasting: 5\n" May 8 13:50:19.028: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 8 13:50:19.028: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 8 13:50:29.060: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 8 13:50:39.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1585 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 8 13:50:39.317: INFO: stderr: "I0508 13:50:39.233724 2925 log.go:172] (0xc00094e0b0) (0xc0007ac1e0) Create stream\nI0508 13:50:39.233770 2925 log.go:172] (0xc00094e0b0) (0xc0007ac1e0) Stream added, broadcasting: 1\nI0508 13:50:39.235678 2925 log.go:172] (0xc00094e0b0) Reply frame received for 1\nI0508 13:50:39.235708 2925 log.go:172] (0xc00094e0b0) (0xc0007ac6e0) Create stream\nI0508 13:50:39.235717 2925 log.go:172] (0xc00094e0b0) (0xc0007ac6e0) Stream added, broadcasting: 3\nI0508 13:50:39.236375 2925 log.go:172] (0xc00094e0b0) Reply frame received for 3\nI0508 13:50:39.236408 2925 log.go:172] (0xc00094e0b0) (0xc00089c000) Create stream\nI0508 13:50:39.236421 2925 log.go:172] (0xc00094e0b0) (0xc00089c000) Stream added, broadcasting: 5\nI0508 13:50:39.237087 2925 log.go:172] (0xc00094e0b0) Reply frame received for 5\nI0508 13:50:39.310257 2925 log.go:172] (0xc00094e0b0) Data frame received for 3\nI0508 13:50:39.310303 2925 log.go:172] (0xc0007ac6e0) (3) Data frame handling\nI0508 13:50:39.310325 2925 log.go:172] (0xc0007ac6e0) (3) Data frame sent\nI0508 13:50:39.310340 2925 log.go:172] (0xc00094e0b0) Data frame received for 3\nI0508 13:50:39.310359 2925 log.go:172] (0xc0007ac6e0) (3) Data frame handling\nI0508 13:50:39.310375 2925 log.go:172] (0xc00094e0b0) Data frame received for 5\nI0508 13:50:39.310391 2925 log.go:172] (0xc00089c000) (5) Data frame handling\nI0508 13:50:39.310420 2925 log.go:172] (0xc00089c000) (5) Data frame sent\nI0508 13:50:39.310445 2925 log.go:172] (0xc00094e0b0) Data frame received for 5\nI0508 13:50:39.310460 2925 log.go:172] (0xc00089c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0508 13:50:39.311971 2925 log.go:172] (0xc00094e0b0) Data frame received for 1\nI0508 13:50:39.311992 2925 log.go:172] (0xc0007ac1e0) (1) Data frame handling\nI0508 13:50:39.312021 2925 log.go:172] (0xc0007ac1e0) (1) Data frame sent\nI0508 13:50:39.312063 2925 log.go:172] (0xc00094e0b0) (0xc0007ac1e0) Stream removed, broadcasting: 1\nI0508 13:50:39.312084 2925 log.go:172] (0xc00094e0b0) Go away received\nI0508 13:50:39.312574 2925 log.go:172] (0xc00094e0b0) (0xc0007ac1e0) Stream removed, broadcasting: 1\nI0508 13:50:39.312600 2925 log.go:172] (0xc00094e0b0) (0xc0007ac6e0) Stream removed, broadcasting: 3\nI0508 13:50:39.312610 2925 log.go:172] (0xc00094e0b0) (0xc00089c000) Stream removed, broadcasting: 5\n" May 8 13:50:39.317: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 8 13:50:39.317: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 8 13:50:49.352: INFO: Waiting for StatefulSet statefulset-1585/ss2 to complete update May 8 13:50:49.352: INFO: Waiting for Pod statefulset-1585/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 8 13:50:49.352: INFO: Waiting for Pod statefulset-1585/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 8 13:50:49.352: INFO: Waiting for Pod statefulset-1585/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 8 13:50:59.359: INFO: Waiting for StatefulSet statefulset-1585/ss2 to complete update May 8 13:50:59.359: INFO: Waiting for Pod statefulset-1585/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 8 13:51:09.361: INFO: Deleting all statefulset in ns statefulset-1585 May 8 13:51:09.364: INFO: Scaling statefulset ss2 to 0 May 8 13:51:29.398: INFO: Waiting for statefulset status.replicas updated to 0 May 8 13:51:29.401: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:51:29.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1585" for this suite. May 8 13:51:37.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:51:37.507: INFO: namespace statefulset-1585 deletion completed in 8.084117847s • [SLOW TEST:139.463 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:51:37.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
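Stepping back to the StatefulSet run summarized just above: the rolling update and rollback are both triggered by mutating the pod template's image, after which the controller creates a new revision and replaces pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0). A minimal sketch of issuing that mutation with client-go, assuming a v1.15-era client (newer releases add a context argument and PatchOptions to Patch) and assuming the container in the template is named "nginx":

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the test harness uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that swaps the container image; this is what
	// creates a new controller revision and starts the rolling update.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.15-alpine"}]}}}}`)
	_, err = cs.AppsV1().StatefulSets("statefulset-1585").
		Patch("ss2", types.StrategicMergePatchType, patch)
	fmt.Println(err)
}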
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:51:43.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5498" for this suite. May 8 13:51:49.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:51:49.865: INFO: namespace namespaces-5498 deletion completed in 6.08318025s STEP: Destroying namespace "nsdeletetest-2143" for this suite. May 8 13:51:49.866: INFO: Namespace nsdeletetest-2143 was already deleted STEP: Destroying namespace "nsdeletetest-9333" for this suite. May 8 13:51:55.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:51:55.961: INFO: namespace nsdeletetest-9333 deletion completed in 6.094773793s • [SLOW TEST:18.454 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:51:55.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-408679c6-4ff8-42cc-aefa-fdee05cceb93 STEP: Creating a pod to test consume configMaps May 8 13:51:56.032: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29" in namespace "configmap-7250" to be "success or failure" May 8 13:51:56.036: INFO: Pod "pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29": Phase="Pending", Reason="", readiness=false. Elapsed: 3.410311ms May 8 13:51:58.040: INFO: Pod "pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008116251s May 8 13:52:00.045: INFO: Pod "pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012632532s STEP: Saw pod success May 8 13:52:00.045: INFO: Pod "pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29" satisfied condition "success or failure" May 8 13:52:00.048: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29 container configmap-volume-test: STEP: delete the pod May 8 13:52:00.102: INFO: Waiting for pod pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29 to disappear May 8 13:52:00.108: INFO: Pod pod-configmaps-ebc0dba4-dcab-4ae1-963a-be699befea29 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:52:00.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7250" for this suite. May 8 13:52:06.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:52:06.200: INFO: namespace configmap-7250 deletion completed in 6.088862846s • [SLOW TEST:10.238 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:52:06.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:52:10.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4150" for this suite. 
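The Kubelet test being torn down above verifies that a container started with a read-only root filesystem cannot write to it. A minimal pod spec requesting that behavior (pod and container names are illustrative; the only field the test depends on is ReadOnlyRootFilesystem):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo test > /file; sleep 240"},
				// With this set, the write above fails and the container
				// exits non-zero, which is what the e2e test asserts.
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].SecurityContext)
}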
May 8 13:52:56.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:52:56.424: INFO: namespace kubelet-test-4150 deletion completed in 46.096109741s • [SLOW TEST:50.224 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:52:56.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:52:56.466: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:53:00.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1760" for this suite. 
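The websocket test above drives exec against the pod through the API server's exec subresource rather than via kubectl. A rough client-go equivalent using the SPDY executor, a sketch only (the conformance test uses the websocket transport, but the subresource and parameters are the same; pod and namespace names here are placeholders):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the URL for the pod's exec subresource, as the e2e framework does.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-1760").Name("pod-exec-websocket").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	fmt.Println(stdout.String(), err)
}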
May 8 13:53:46.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:53:46.734: INFO: namespace pods-1760 deletion completed in 46.119866308s • [SLOW TEST:50.309 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:53:46.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 13:53:46.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5882' May 8 13:53:49.393: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 13:53:49.393: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 8 13:53:49.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5882' May 8 13:53:49.522: INFO: stderr: "" May 8 13:53:49.522: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:53:49.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5882" for this suite. 
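The stderr captured above flags kubectl run --generator=job/v1 as deprecated. A sketch of producing the same Job with the non-deprecated command, shelling out the way the e2e framework does (kubectl path, kubeconfig, image, and namespace copied from the log; kubectl create job is assumed to be available, as it is in clients of this vintage):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "kubectl create job" is the replacement for "kubectl run
	// --restart=OnFailure", which used to imply a Job.
	out, err := exec.Command(
		"/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"create", "job", "e2e-test-nginx-job",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--namespace=kubectl-5882",
	).CombinedOutput()
	fmt.Println(string(out), err)
}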
May 8 13:53:55.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:53:55.619: INFO: namespace kubectl-5882 deletion completed in 6.093049304s • [SLOW TEST:8.884 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:53:55.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 8 13:53:59.689: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-76bf95ea-a832-485b-9e85-9da615aee69e,GenerateName:,Namespace:events-1259,SelfLink:/api/v1/namespaces/events-1259/pods/send-events-76bf95ea-a832-485b-9e85-9da615aee69e,UID:28b4630a-1369-4aa0-a6ea-fb11a2e1ce18,ResourceVersion:9718202,Generation:0,CreationTimestamp:2020-05-08 13:53:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 664560409,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wxpqf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wxpqf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wxpqf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002db1bb0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002db1bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:53:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:53:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:53:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:53:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.147,StartTime:2020-05-08 13:53:55 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-08 13:53:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://0d613006e127ad83cba4205f2fe0fa73f2a057ddc2a114fa7e7c8dc94c7bf860}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 8 13:54:01.694: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 8 13:54:03.704: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:54:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1259" for this suite. May 8 13:54:43.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:54:43.906: INFO: namespace events-1259 deletion completed in 40.170857358s • [SLOW TEST:48.288 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:54:43.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 13:54:44.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2101' May 8 13:54:44.110: INFO: stderr: "" May 8 13:54:44.110: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 8 13:54:49.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2101 -o json' May 8 13:54:49.258: INFO: stderr: "" May 8 13:54:49.258: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-08T13:54:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2101\",\n \"resourceVersion\": \"9718323\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2101/pods/e2e-test-nginx-pod\",\n \"uid\": \"c775e3b9-6f4a-4bf9-8327-24965a0ef82a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kf4q8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kf4q8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kf4q8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T13:54:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T13:54:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T13:54:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T13:54:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d80697eb5a25c9cf0fdbe21c41141fb6a17f9d6c476baf3cd786d823fd7ab754\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-08T13:54:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.45\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-08T13:54:44Z\"\n }\n}\n" STEP: replace the image in the pod May 8 13:54:49.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2101' May 8 13:54:49.540: INFO: stderr: "" May 8 13:54:49.540: INFO: stdout: 
"pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 8 13:54:49.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2101' May 8 13:55:01.864: INFO: stderr: "" May 8 13:55:01.864: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:55:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2101" for this suite. May 8 13:55:07.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:55:07.996: INFO: namespace kubectl-2101 deletion completed in 6.127826718s • [SLOW TEST:24.089 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:55:07.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 8 13:55:08.108: INFO: PodSpec: initContainers in spec.initContainers May 8 13:55:53.118: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fdf02b07-8522-4324-890d-8f538657336f", GenerateName:"", Namespace:"init-container-3230", SelfLink:"/api/v1/namespaces/init-container-3230/pods/pod-init-fdf02b07-8522-4324-890d-8f538657336f", UID:"6e7e1915-0564-4862-a0f2-a69e1393e3c6", ResourceVersion:"9718502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724542908, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"108368428"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xn9xm", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e4a1c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xn9xm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xn9xm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, 
scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xn9xm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00302a2c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d2e060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00302a350)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00302a380)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00302a388), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00302a38c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724542908, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724542908, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724542908, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724542908, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", 
PodIP:"10.244.2.148", StartTime:(*v1.Time)(0xc00277a340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002574150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025741c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://eea7e5c65681db14d3e3145037cbc4bd234ff10722478f35e064badddd9b3e8d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00277a380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00277a360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:55:53.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3230" for this suite. 
May 8 13:56:15.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:56:15.354: INFO: namespace init-container-3230 deletion completed in 22.11901366s • [SLOW TEST:67.358 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:56:15.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:56:15.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb" in namespace "downward-api-4127" to be "success or failure" May 8 13:56:15.456: INFO: Pod "downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.498866ms May 8 13:56:17.460: INFO: Pod "downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038410215s May 8 13:56:19.464: INFO: Pod "downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042522231s STEP: Saw pod success May 8 13:56:19.464: INFO: Pod "downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb" satisfied condition "success or failure" May 8 13:56:19.466: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb container client-container: STEP: delete the pod May 8 13:56:19.494: INFO: Waiting for pod downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb to disappear May 8 13:56:19.503: INFO: Pod downwardapi-volume-703b7aa9-5ba9-48c1-bd97-191569d99ceb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:56:19.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4127" for this suite. 
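This Downward API case exercises a detail of resourceFieldRef: when the container declares no memory limit, the value projected into the volume falls back to the node's allocatable memory. A sketch of the volume wiring (the referenced container, named client-container as in the log, deliberately sets no resources):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					// With no memory limit set on the container, the kubelet
					// writes node allocatable memory to this file instead.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}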
May 8 13:56:25.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:56:25.605: INFO: namespace downward-api-4127 deletion completed in 6.099606517s • [SLOW TEST:10.251 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:56:25.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 13:56:25.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350" in namespace "projected-4623" to be "success or failure" May 8 13:56:25.705: INFO: Pod "downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350": Phase="Pending", Reason="", readiness=false. Elapsed: 3.719393ms May 8 13:56:27.709: INFO: Pod "downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007720481s May 8 13:56:29.714: INFO: Pod "downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012025894s STEP: Saw pod success May 8 13:56:29.714: INFO: Pod "downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350" satisfied condition "success or failure" May 8 13:56:29.716: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350 container client-container: STEP: delete the pod May 8 13:56:29.780: INFO: Waiting for pod downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350 to disappear May 8 13:56:29.795: INFO: Pod downwardapi-volume-15af4cc6-d9e0-4b73-b697-bbc4db192350 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:56:29.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4623" for this suite. 
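The projected flavor of the previous test wraps the same downward API items in a projected volume, which can merge several sources (secrets, configmaps, downward API) into one mount. The only structural difference from the plain downwardAPI volume, sketched under the same assumptions as above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					// Same items as a downwardAPI volume, nested one level
					// down so they can sit alongside other projections.
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}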
May 8 13:56:35.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:56:35.887: INFO: namespace projected-4623 deletion completed in 6.087733019s • [SLOW TEST:10.281 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:56:35.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-3c54f93b-effa-4685-b8ef-61061cccd988 STEP: Creating configMap with name cm-test-opt-upd-2f764c57-23b7-4248-a0a9-31dfb90f074a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3c54f93b-effa-4685-b8ef-61061cccd988 STEP: Updating configmap cm-test-opt-upd-2f764c57-23b7-4248-a0a9-31dfb90f074a STEP: Creating configMap with name cm-test-opt-create-70ef89e9-f70c-4825-9f18-8cbd09d54d45 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:57:56.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1522" for this suite. 
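The optional-ConfigMap test above relies on Optional: true, which lets the pod start (and keep running) while a referenced ConfigMap is absent, and lets the kubelet propagate later creates, updates, and deletes into the mounted volume; that propagation is what the "waiting to observe update in volume" step polls for. A minimal sketch of such a volume source (the ConfigMap name is illustrative; the run above uses generated names):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
        vol := corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    // Optional: true means a missing ConfigMap is not a
                    // mount error; the volume is populated if and when
                    // the ConfigMap appears.
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"},
                    Optional:             boolPtr(true),
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }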
May 8 13:58:20.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:58:20.636: INFO: namespace configmap-1522 deletion completed in 24.112751009s • [SLOW TEST:104.748 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:58:20.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-kgxwt in namespace proxy-2211 I0508 13:58:20.824857 6 runners.go:180] Created replication controller with name: proxy-service-kgxwt, namespace: proxy-2211, replica count: 1 I0508 13:58:21.875522 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 13:58:22.875732 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 13:58:23.875981 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 13:58:24.876211 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 13:58:25.876409 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0508 13:58:26.876639 6 runners.go:180] proxy-service-kgxwt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 13:58:26.880: INFO: setup took 6.145746746s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 8 13:58:26.886: INFO: (0) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 5.482382ms) May 8 13:58:26.890: INFO: (0) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... 
(200; 9.740625ms) May 8 13:58:26.890: INFO: (0) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 9.877702ms) May 8 13:58:26.890: INFO: (0) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 9.931418ms) May 8 13:58:26.891: INFO: (0) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 10.418966ms) May 8 13:58:26.891: INFO: (0) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 10.670092ms) May 8 13:58:26.891: INFO: (0) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 10.550478ms) May 8 13:58:26.891: INFO: (0) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 10.801006ms) May 8 13:58:26.892: INFO: (0) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 11.269337ms) May 8 13:58:26.892: INFO: (0) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 11.588555ms) May 8 13:58:26.892: INFO: (0) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 11.825617ms) May 8 13:58:26.895: INFO: (0) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 2.773487ms) May 8 13:58:26.906: INFO: (1) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.10453ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 4.435379ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.39703ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 4.703425ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.753626ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.928396ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.903475ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 5.142672ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 5.115042ms) May 8 13:58:26.908: INFO: (1) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... (200; 5.384125ms) May 8 13:58:26.916: INFO: (2) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 6.069561ms) May 8 13:58:26.916: INFO: (2) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 6.027723ms) May 8 13:58:26.916: INFO: (2) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 6.105994ms) May 8 13:58:26.916: INFO: (2) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 6.129628ms) May 8 13:58:26.916: INFO: (2) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test<... 
(200; 5.733523ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 5.702088ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 5.79203ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 5.722283ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 5.631198ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 5.74979ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 5.751192ms) May 8 13:58:26.922: INFO: (3) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 5.873638ms) May 8 13:58:26.923: INFO: (3) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 6.721085ms) May 8 13:58:26.927: INFO: (4) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.667285ms) May 8 13:58:26.927: INFO: (4) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 3.887372ms) May 8 13:58:26.927: INFO: (4) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 4.100674ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 4.363518ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 4.29987ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 4.264185ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 4.831841ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.769878ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.84733ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 4.845588ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.819573ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 4.854515ms) May 8 13:58:26.928: INFO: (4) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.895432ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 3.466293ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.504749ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... (200; 3.604952ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.575518ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... 
(200; 3.603623ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.556607ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 3.56625ms) May 8 13:58:26.932: INFO: (5) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.594471ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 4.222544ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 4.427545ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 4.429909ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 4.496512ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.579539ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.897181ms) May 8 13:58:26.933: INFO: (5) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 5.016538ms) May 8 13:58:26.936: INFO: (6) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... (200; 2.91231ms) May 8 13:58:26.936: INFO: (6) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 2.875646ms) May 8 13:58:26.936: INFO: (6) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 2.909736ms) May 8 13:58:26.937: INFO: (6) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 3.914131ms) May 8 13:58:26.937: INFO: (6) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 4.01479ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.149837ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 4.373995ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.401967ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.498031ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 4.446469ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 4.537478ms) May 8 13:58:26.938: INFO: (6) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 4.680493ms) May 8 13:58:26.941: INFO: (7) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 2.862239ms) May 8 13:58:26.942: INFO: (7) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.862058ms) May 8 13:58:26.942: INFO: (7) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... 
(200; 3.969241ms) May 8 13:58:26.942: INFO: (7) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 3.991705ms) May 8 13:58:26.942: INFO: (7) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 4.419397ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 4.347463ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 4.436377ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.567271ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.593533ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.604233ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 4.722269ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... (200; 4.815256ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 4.894028ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.924058ms) May 8 13:58:26.943: INFO: (7) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 5.288995ms) May 8 13:58:26.946: INFO: (8) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 2.28935ms) May 8 13:58:26.946: INFO: (8) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 2.281729ms) May 8 13:58:26.946: INFO: (8) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 2.360606ms) May 8 13:58:26.947: INFO: (8) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 3.192496ms) May 8 13:58:26.947: INFO: (8) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.380666ms) May 8 13:58:26.947: INFO: (8) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.554989ms) May 8 13:58:26.947: INFO: (8) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 3.556544ms) May 8 13:58:26.947: INFO: (8) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 4.060475ms) May 8 13:58:26.949: INFO: (8) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 5.277318ms) May 8 13:58:26.949: INFO: (8) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 5.408205ms) May 8 13:58:26.949: INFO: (8) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 5.367981ms) May 8 13:58:26.973: INFO: (9) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 23.963609ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... 
(200; 24.469099ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 24.33935ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 24.397406ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 24.678284ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 24.804385ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 24.994558ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 25.070377ms) May 8 13:58:26.974: INFO: (9) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 25.642967ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 26.680092ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 27.225371ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 26.906272ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 27.196239ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 27.067002ms) May 8 13:58:26.976: INFO: (9) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 27.330778ms) May 8 13:58:26.981: INFO: (10) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.537238ms) May 8 13:58:26.981: INFO: (10) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.55373ms) May 8 13:58:26.981: INFO: (10) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 4.620791ms) May 8 13:58:26.981: INFO: (10) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 4.589249ms) May 8 13:58:26.981: INFO: (10) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.58869ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 5.031344ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 5.058188ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 5.271793ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 5.464866ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 5.6307ms) May 8 13:58:26.982: INFO: (10) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 5.198376ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... 
(200; 5.176219ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 5.160861ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 5.255684ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 5.158486ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 5.219771ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 5.281694ms) May 8 13:58:26.988: INFO: (11) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test<... (200; 5.206557ms) May 8 13:58:26.990: INFO: (12) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 2.60605ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 2.659461ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 2.825721ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 3.103724ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.131955ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 3.16561ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 3.256326ms) May 8 13:58:26.991: INFO: (12) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 3.241529ms) May 8 13:58:26.993: INFO: (12) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.829343ms) May 8 13:58:26.993: INFO: (12) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.807392ms) May 8 13:58:26.993: INFO: (12) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test<... (200; 4.425063ms) May 8 13:58:26.998: INFO: (13) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 4.887336ms) May 8 13:58:26.998: INFO: (13) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.903855ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 4.932597ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 4.959633ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 5.021108ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 4.965456ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 4.953144ms) May 8 13:58:26.999: INFO: (13) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... 
(200; 3.117541ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 3.428315ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 3.512789ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.460469ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.529048ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 3.511158ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 3.565323ms) May 8 13:58:27.003: INFO: (14) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.53988ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 4.248603ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 4.692245ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.631275ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 4.643438ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 4.620888ms) May 8 13:58:27.004: INFO: (14) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 4.672956ms) May 8 13:58:27.007: INFO: (15) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 2.567673ms) May 8 13:58:27.007: INFO: (15) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 2.845094ms) May 8 13:58:27.007: INFO: (15) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 2.873865ms) May 8 13:58:27.007: INFO: (15) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 3.215394ms) May 8 13:58:27.008: INFO: (15) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.47883ms) May 8 13:58:27.008: INFO: (15) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 3.45597ms) May 8 13:58:27.008: INFO: (15) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test<... (200; 4.116823ms) May 8 13:58:27.008: INFO: (15) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 4.086028ms) May 8 13:58:27.008: INFO: (15) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 4.082062ms) May 8 13:58:27.009: INFO: (15) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 4.846063ms) May 8 13:58:27.010: INFO: (15) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 5.278292ms) May 8 13:58:27.012: INFO: (16) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 2.260249ms) May 8 13:58:27.012: INFO: (16) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... 
(200; 2.442043ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 3.996261ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.038703ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 4.071821ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 4.093548ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 4.149194ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 4.169256ms) May 8 13:58:27.014: INFO: (16) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 2.994998ms) May 8 13:58:27.019: INFO: (17) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 3.090855ms) May 8 13:58:27.019: INFO: (17) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 3.108816ms) May 8 13:58:27.019: INFO: (17) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... (200; 3.257727ms) May 8 13:58:27.019: INFO: (17) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 3.271074ms) May 8 13:58:27.019: INFO: (17) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 3.386866ms) May 8 13:58:27.020: INFO: (17) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test (200; 2.017849ms) May 8 13:58:27.030: INFO: (18) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 9.287331ms) May 8 13:58:27.031: INFO: (18) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 9.790918ms) May 8 13:58:27.031: INFO: (18) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:1080/proxy/: ... (200; 9.801619ms) May 8 13:58:27.031: INFO: (18) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 9.924692ms) May 8 13:58:27.032: INFO: (18) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 10.66459ms) May 8 13:58:27.032: INFO: (18) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 10.955156ms) May 8 13:58:27.032: INFO: (18) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 10.90393ms) May 8 13:58:27.032: INFO: (18) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: test<... (200; 11.972545ms) May 8 13:58:27.033: INFO: (18) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 12.20678ms) May 8 13:58:27.034: INFO: (18) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 12.767088ms) May 8 13:58:27.034: INFO: (18) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 12.682117ms) May 8 13:58:27.034: INFO: (18) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 12.960356ms) May 8 13:58:27.037: INFO: (19) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:1080/proxy/: test<... 
(200; 3.298389ms) May 8 13:58:27.037: INFO: (19) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:462/proxy/: tls qux (200; 3.358001ms) May 8 13:58:27.039: INFO: (19) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:160/proxy/: foo (200; 5.391935ms) May 8 13:58:27.039: INFO: (19) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv/proxy/: test (200; 5.325531ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:443/proxy/: ... (200; 5.756077ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname1/proxy/: foo (200; 5.767499ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/pods/http:proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 5.806354ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname1/proxy/: tls baz (200; 5.758356ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/proxy-service-kgxwt:portname2/proxy/: bar (200; 5.815941ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/pods/proxy-service-kgxwt-tbhwv:162/proxy/: bar (200; 5.883432ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/https:proxy-service-kgxwt:tlsportname2/proxy/: tls qux (200; 5.833906ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname1/proxy/: foo (200; 5.878034ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/pods/https:proxy-service-kgxwt-tbhwv:460/proxy/: tls baz (200; 5.898027ms) May 8 13:58:27.040: INFO: (19) /api/v1/namespaces/proxy-2211/services/http:proxy-service-kgxwt:portname2/proxy/: bar (200; 5.803669ms) STEP: deleting ReplicationController proxy-service-kgxwt in namespace proxy-2211, will wait for the garbage collector to delete the pods May 8 13:58:27.098: INFO: Deleting ReplicationController proxy-service-kgxwt took: 7.028377ms May 8 13:58:27.399: INFO: Terminating ReplicationController proxy-service-kgxwt pods took: 300.214188ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:58:32.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2211" for this suite. 
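The 320 attempts above all follow the same apiserver proxy URL shapes: a pod target of the form [scheme:]name:port and a service target of the form [scheme:]name:portname, each followed by /proxy/. A small helper sketch that reproduces the paths seen in this run (a bare name:port defaults to http on the backend):

    package main

    import "fmt"

    // podProxyPath builds /api/v1/namespaces/{ns}/pods/{target}/proxy/{rest},
    // where target is [scheme:]pod:port.
    func podProxyPath(ns, scheme, pod string, port int, rest string) string {
        target := fmt.Sprintf("%s:%d", pod, port)
        if scheme != "" {
            target = scheme + ":" + target
        }
        return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/proxy/%s", ns, target, rest)
    }

    // svcProxyPath builds the service flavor, addressing a named port.
    func svcProxyPath(ns, scheme, svc, portName string) string {
        target := svc + ":" + portName
        if scheme != "" {
            target = scheme + ":" + target
        }
        return fmt.Sprintf("/api/v1/namespaces/%s/services/%s/proxy/", ns, target)
    }

    func main() {
        fmt.Println(podProxyPath("proxy-2211", "", "proxy-service-kgxwt-tbhwv", 160, ""))
        fmt.Println(podProxyPath("proxy-2211", "https", "proxy-service-kgxwt-tbhwv", 443, ""))
        fmt.Println(svcProxyPath("proxy-2211", "http", "proxy-service-kgxwt", "portname2"))
    }

These requests go through the apiserver (authenticated by the kubeconfig), which is why the test can reach pod ports like 160 and 443 without any node-level routing.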
May 8 13:58:38.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:58:38.350: INFO: namespace proxy-2211 deletion completed in 6.136850682s • [SLOW TEST:17.714 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:58:38.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-1ded6923-6d44-4325-a672-7c43cc4cb7c2 STEP: Creating a pod to test consume secrets May 8 13:58:38.439: INFO: Waiting up to 5m0s for pod "pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9" in namespace "secrets-6121" to be "success or failure" May 8 13:58:38.444: INFO: Pod "pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608471ms May 8 13:58:40.448: INFO: Pod "pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009059346s May 8 13:58:42.452: INFO: Pod "pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012908069s STEP: Saw pod success May 8 13:58:42.452: INFO: Pod "pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9" satisfied condition "success or failure" May 8 13:58:42.455: INFO: Trying to get logs from node iruya-worker pod pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9 container secret-volume-test: STEP: delete the pod May 8 13:58:42.499: INFO: Waiting for pod pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9 to disappear May 8 13:58:42.516: INFO: Pod pod-secrets-eb19beb5-89b7-47f4-b1b6-e03b8d2cbcc9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:58:42.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6121" for this suite. 
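The secret volume in this test uses an items mapping to rename a key on disk and a per-item mode for that file's permission bits. A sketch with illustrative key, path, and mode values (the run above uses its own generated secret name):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func modePtr(m int32) *int32 { return &m }

    func main() {
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "secret-test-map",
                    // Items remaps the key "data-1" to a custom path, and
                    // Mode sets that one file's permissions (0400 here).
                    Items: []corev1.KeyToPath{{
                        Key:  "data-1",
                        Path: "new-path-data-1",
                        Mode: modePtr(0400),
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }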
May 8 13:58:48.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:58:48.632: INFO: namespace secrets-6121 deletion completed in 6.112468576s • [SLOW TEST:10.282 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:58:48.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7587813d-1e36-4afa-aebe-719702bf3dc9 STEP: Creating a pod to test consume secrets May 8 13:58:48.808: INFO: Waiting up to 5m0s for pod "pod-secrets-72574fb4-374c-42db-95be-904baf682538" in namespace "secrets-16" to be "success or failure" May 8 13:58:48.871: INFO: Pod "pod-secrets-72574fb4-374c-42db-95be-904baf682538": Phase="Pending", Reason="", readiness=false. Elapsed: 62.975575ms May 8 13:58:50.875: INFO: Pod "pod-secrets-72574fb4-374c-42db-95be-904baf682538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067020224s May 8 13:58:52.878: INFO: Pod "pod-secrets-72574fb4-374c-42db-95be-904baf682538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07046767s STEP: Saw pod success May 8 13:58:52.878: INFO: Pod "pod-secrets-72574fb4-374c-42db-95be-904baf682538" satisfied condition "success or failure" May 8 13:58:52.881: INFO: Trying to get logs from node iruya-worker pod pod-secrets-72574fb4-374c-42db-95be-904baf682538 container secret-volume-test: STEP: delete the pod May 8 13:58:52.919: INFO: Waiting for pod pod-secrets-72574fb4-374c-42db-95be-904baf682538 to disappear May 8 13:58:52.932: INFO: Pod pod-secrets-72574fb4-374c-42db-95be-904baf682538 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 13:58:52.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-16" for this suite. May 8 13:58:59.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:58:59.092: INFO: namespace secrets-16 deletion completed in 6.157355193s STEP: Destroying namespace "secret-namespace-5938" for this suite. 
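This test depends on secret names being unique only per namespace: the same name can exist in both "secrets-16" and "secret-namespace-5938", and a pod's secret volume always resolves against the pod's own namespace, which is why two namespaces are destroyed at the end. A sketch constructing the two same-named secrets (name and data are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Same Name in two namespaces is legal; only the pod's own
        // namespace is consulted when mounting the volume.
        for _, ns := range []string{"secrets-16", "secret-namespace-5938"} {
            s := corev1.Secret{
                ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: ns},
                Data:       map[string][]byte{"data-1": []byte("value-1")},
            }
            fmt.Printf("%s/%s\n", s.Namespace, s.Name)
        }
    }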
May 8 13:59:05.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 13:59:05.189: INFO: namespace secret-namespace-5938 deletion completed in 6.096323264s • [SLOW TEST:16.556 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 13:59:05.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 13:59:05.293: INFO: Creating deployment "nginx-deployment" May 8 13:59:05.316: INFO: Waiting for observed generation 1 May 8 13:59:07.572: INFO: Waiting for all required pods to come up May 8 13:59:07.686: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 8 13:59:18.081: INFO: Waiting for deployment "nginx-deployment" to complete May 8 13:59:18.086: INFO: Updating deployment "nginx-deployment" with a non-existent image May 8 13:59:18.092: INFO: Updating deployment nginx-deployment May 8 13:59:18.092: INFO: Waiting for observed generation 2 May 8 13:59:20.826: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 8 13:59:20.859: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 8 13:59:20.982: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 8 13:59:21.585: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 8 13:59:21.585: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 8 13:59:21.587: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 8 13:59:21.592: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 8 13:59:21.592: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 8 13:59:21.597: INFO: Updating deployment nginx-deployment May 8 13:59:21.597: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 8 13:59:21.664: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 8 13:59:22.176: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 8 13:59:22.600: INFO: 
Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-445,SelfLink:/apis/apps/v1/namespaces/deployment-445/deployments/nginx-deployment,UID:cf4c15e8-7b29-4665-b4c3-30ee62419bbb,ResourceVersion:9719306,Generation:3,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-08 13:59:20 +0000 UTC 2020-05-08 13:59:05 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-08 13:59:21 +0000 UTC 2020-05-08 13:59:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 8 13:59:23.168: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-445,SelfLink:/apis/apps/v1/namespaces/deployment-445/replicasets/nginx-deployment-55fb7cb77f,UID:19ff130c-1fdb-456a-8843-46d8a72372b9,ResourceVersion:9719352,Generation:3,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cf4c15e8-7b29-4665-b4c3-30ee62419bbb 0xc002f2ff27 0xc002f2ff28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 13:59:23.168: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 8 13:59:23.168: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-445,SelfLink:/apis/apps/v1/namespaces/deployment-445/replicasets/nginx-deployment-7b8c6f4498,UID:1e0d98d2-c3df-4bd3-946e-4b7a61d465c6,ResourceVersion:9719333,Generation:3,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cf4c15e8-7b29-4665-b4c3-30ee62419bbb 0xc002f2fff7 0xc002f2fff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 8 13:59:24.309: INFO: Pod "nginx-deployment-55fb7cb77f-4vhw5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4vhw5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-4vhw5,UID:7d3d214f-fe6c-4aa3-b0e9-8a4f9946beb0,ResourceVersion:9719285,Generation:0,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094957 0xc003094958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030949d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030949f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-08 13:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.310: INFO: Pod "nginx-deployment-55fb7cb77f-54p55" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-54p55,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-54p55,UID:99450021-b526-4cd0-a1b8-fbb8ab193758,ResourceVersion:9719305,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094ac0 0xc003094ac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003094b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003094b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.310: INFO: Pod "nginx-deployment-55fb7cb77f-5jprq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5jprq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-5jprq,UID:201c6c44-01d5-4cd2-aad8-c621639376e0,ResourceVersion:9719265,Generation:0,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094be7 0xc003094be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003094c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc003094c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-08 13:59:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.310: INFO: Pod "nginx-deployment-55fb7cb77f-6k2dw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6k2dw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-6k2dw,UID:c093853e-1bad-4d68-a5c7-3fc78d4d1a0f,ResourceVersion:9719290,Generation:0,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094d50 0xc003094d51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003094dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003094df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-08 13:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.310: INFO: Pod "nginx-deployment-55fb7cb77f-6n6cd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6n6cd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-6n6cd,UID:edc22424-87ab-4b01-8874-f032fc420fde,ResourceVersion:9719340,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094ec0 0xc003094ec1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003094f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003094f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.310: INFO: Pod "nginx-deployment-55fb7cb77f-6rjvv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6rjvv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-6rjvv,UID:d5a2e0b5-0832-4e21-a23f-848ec4fd5a08,ResourceVersion:9719323,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003094fe7 0xc003094fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095060} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.311: INFO: Pod "nginx-deployment-55fb7cb77f-7msxg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7msxg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-7msxg,UID:e27f7690-bd69-443f-a7e9-c70ebbd8740b,ResourceVersion:9719357,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095107 0xc003095108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030951a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-08 13:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.311: INFO: Pod "nginx-deployment-55fb7cb77f-b5b2m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b5b2m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-b5b2m,UID:605d1d25-3ef4-457f-af79-351b1597038e,ResourceVersion:9719343,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095270 0xc003095271}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030952f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.311: INFO: Pod "nginx-deployment-55fb7cb77f-cq986" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cq986,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-cq986,UID:807d51c2-9810-428a-8542-d2fc6b3f1ddf,ResourceVersion:9719259,Generation:0,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095397 0xc003095398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095410} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-08 13:59:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.311: INFO: Pod "nginx-deployment-55fb7cb77f-fsbw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fsbw8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-fsbw8,UID:679fee94-3add-47eb-8805-3de7705c292e,ResourceVersion:9719348,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095500 0xc003095501}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095580} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030955a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.311: INFO: Pod "nginx-deployment-55fb7cb77f-sprb6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sprb6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-sprb6,UID:6a48b8e4-47c8-4eb6-bf04-220afa6f5354,ResourceVersion:9719341,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095627 0xc003095628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030956a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030956c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.312: INFO: Pod "nginx-deployment-55fb7cb77f-v6nwl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v6nwl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-v6nwl,UID:ab1771bf-8827-4771-baee-1ce11eeddce5,ResourceVersion:9719344,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095747 0xc003095748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030957c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030957e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.312: INFO: Pod "nginx-deployment-55fb7cb77f-zmfxf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zmfxf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-55fb7cb77f-zmfxf,UID:1138bf26-1ec7-4585-b417-2d200d359326,ResourceVersion:9719256,Generation:0,CreationTimestamp:2020-05-08 13:59:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 19ff130c-1fdb-456a-8843-46d8a72372b9 0xc003095867 0xc003095868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030958e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-08 13:59:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.312: INFO: Pod "nginx-deployment-7b8c6f4498-bn6ch" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bn6ch,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-bn6ch,UID:d5062dd7-809c-48af-bc08-64c466884094,ResourceVersion:9719327,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0030959d0 0xc0030959d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.312: INFO: Pod "nginx-deployment-7b8c6f4498-bqwgn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bqwgn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-bqwgn,UID:d0f8e897-909d-4ea6-b7ac-ff719f1f959b,ResourceVersion:9719334,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc003095ae7 0xc003095ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095b60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003095b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.312: INFO: Pod "nginx-deployment-7b8c6f4498-fpsqt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fpsqt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-fpsqt,UID:419a7c1b-6df1-498c-bf26-a35516a587c8,ResourceVersion:9719337,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc003095c07 0xc003095c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-hbdlq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hbdlq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-hbdlq,UID:81161916-c918-4ce7-9c75-8b5a5dbd6197,ResourceVersion:9719338,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc003095d27 0xc003095d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-hfhfj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hfhfj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-hfhfj,UID:70dd0f7d-b8f9-4d68-89c5-bb6cb91b60a3,ResourceVersion:9719216,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc003095e47 0xc003095e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003095ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003095ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.154,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5fd42ee495bb3a68d40e24a768737ed05f732f54787922af5ddcdc0114e246f8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-hjj2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hjj2j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-hjj2j,UID:2bc741ca-eae4-4a31-bcfe-01bf94c76cd9,ResourceVersion:9719326,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc003095fb7 0xc003095fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-ht2bq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ht2bq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-ht2bq,UID:ac4c7314-f9d3-4fd0-ad0b-95d1a774ac93,ResourceVersion:9719302,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae0d7 0xc0032ae0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae150} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0032ae170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-jtkh2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jtkh2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-jtkh2,UID:12f6d6f5-f4d9-4305-85c3-bf6cb9b2c239,ResourceVersion:9719219,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae1f7 0xc0032ae1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae270} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.52,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://92d1bd083da121bccdef051d68d0e26544574b8e6877b62feeaf667512ad351e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.313: INFO: Pod "nginx-deployment-7b8c6f4498-kmblh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kmblh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-kmblh,UID:aab3d6d7-3b2f-4d78-acf0-58ac71146901,ResourceVersion:9719335,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae367 0xc0032ae368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-m8tb2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m8tb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-m8tb2,UID:214c0d71-b626-4e2e-a1f1-a891d71ae321,ResourceVersion:9719222,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae487 
0xc0032ae488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae500} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.51,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1bb8d3e35e8872e86c9142f9cdd9181784d173b20ad67c3042ba2217cf027ced}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-mv5z8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mv5z8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-mv5z8,UID:00d34372-6e50-4e06-bf26-22ac50f04fd2,ResourceVersion:9719209,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae5f7 0xc0032ae5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae670} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.155,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d0bffc63816b5a8bc319ad5b9f36f110a8eb7dda892b904f8eb9d0d4a9fed6e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-n2472" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n2472,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-n2472,UID:741aa4df-fde7-49a6-ab07-bbd33ba06ac1,ResourceVersion:9719170,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae767 0xc0032ae768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.48,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://53e1024ecbec6a6035dd31f6e4918d19bada340bb1045d52131879d873540bd9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-pj4qs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pj4qs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-pj4qs,UID:8edaa691-22bd-4787-a3ea-da59a8316471,ResourceVersion:9719325,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae8d7 0xc0032ae8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032ae950} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032ae970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-ptbwg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ptbwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-ptbwg,UID:befe9772-4642-442f-a225-8564204bbc26,ResourceVersion:9719304,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032ae9f7 0xc0032ae9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032aea70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0032aea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-s6sbf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s6sbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-s6sbf,UID:dd3629f4-4a03-4900-9c4f-74b9db4f499e,ResourceVersion:9719198,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032aeb17 0xc0032aeb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032aeb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032aebb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.50,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://c7d66d49efa1448d0b1acd218d0b566ea5b54a43a17e8d7144a9aca2840b836c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.314: INFO: Pod "nginx-deployment-7b8c6f4498-sc7tp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sc7tp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-sc7tp,UID:d41bbb25-6c94-4383-a672-c99257abd956,ResourceVersion:9719186,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032aec87 0xc0032aec88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032aed00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032aed20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.49,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://284cc17afe3e4636a1aed9f0ebd2c31116d044aab260339dd2ec39c42deb07bc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.315: INFO: Pod "nginx-deployment-7b8c6f4498-sfj7p" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sfj7p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-sfj7p,UID:bbe6b4c0-4fb2-424a-b574-5be9fe3f1301,ResourceVersion:9719194,Generation:0,CreationTimestamp:2020-05-08 13:59:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032aedf7 0xc0032aedf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032aee70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032aee90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.153,StartTime:2020-05-08 13:59:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-08 13:59:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://41c2fe26d20096a67c000f1e71b0fec3b1a7275cea1fae22588a10823b38d09a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.315: INFO: Pod "nginx-deployment-7b8c6f4498-swqrr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-swqrr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-swqrr,UID:68d35b5e-8021-40e9-8ad9-6339459dfb8d,ResourceVersion:9719336,Generation:0,CreationTimestamp:2020-05-08 13:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032aef67 0xc0032aef68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032aefe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032af000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.315: INFO: Pod "nginx-deployment-7b8c6f4498-t5fw4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t5fw4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-t5fw4,UID:f447ea06-d0e9-4a79-8b9e-82611f85ea5f,ResourceVersion:9719351,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032af087 0xc0032af088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032af100} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032af120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-08 13:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 8 13:59:24.315: INFO: Pod "nginx-deployment-7b8c6f4498-w56gq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w56gq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-445,SelfLink:/api/v1/namespaces/deployment-445/pods/nginx-deployment-7b8c6f4498-w56gq,UID:2718793b-f453-47bc-af5c-abe1628764b0,ResourceVersion:9719329,Generation:0,CreationTimestamp:2020-05-08 13:59:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 1e0d98d2-c3df-4bd3-946e-4b7a61d465c6 0xc0032af1e7 0xc0032af1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djpdt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djpdt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-djpdt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032af260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0032af280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 13:59:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:59:24.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-445" for this suite.
May 8 13:59:49.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 13:59:49.185: INFO: namespace deployment-445 deletion completed in 24.55282356s
• [SLOW TEST:43.996 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 13:59:49.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 8 13:59:49.641: INFO: Waiting up to 5m0s for pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78" in namespace "emptydir-2636" to be "success or failure"
May 8 13:59:49.651: INFO: Pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78": Phase="Pending", Reason="", readiness=false. Elapsed: 9.779544ms
May 8 13:59:51.654: INFO: Pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012980723s
May 8 13:59:53.658: INFO: Pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017517887s
May 8 13:59:55.663: INFO: Pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022168892s
STEP: Saw pod success
May 8 13:59:55.663: INFO: Pod "pod-546855cb-4791-4ccb-b634-2e5a61bb0d78" satisfied condition "success or failure"
May 8 13:59:55.666: INFO: Trying to get logs from node iruya-worker pod pod-546855cb-4791-4ccb-b634-2e5a61bb0d78 container test-container:
STEP: delete the pod
May 8 13:59:55.713: INFO: Waiting for pod pod-546855cb-4791-4ccb-b634-2e5a61bb0d78 to disappear
May 8 13:59:55.779: INFO: Pod pod-546855cb-4791-4ccb-b634-2e5a61bb0d78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 13:59:55.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2636" for this suite.
May 8 14:00:01.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:00:01.895: INFO: namespace emptydir-2636 deletion completed in 6.11166961s
• [SLOW TEST:12.709 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:00:01.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 8 14:00:02.257: INFO: Waiting up to 5m0s for pod "downward-api-0389520c-df2b-4d36-a80f-c2fa78704905" in namespace "downward-api-3911" to be "success or failure"
May 8 14:00:02.306: INFO: Pod "downward-api-0389520c-df2b-4d36-a80f-c2fa78704905": Phase="Pending", Reason="", readiness=false. Elapsed: 49.516394ms
May 8 14:00:04.310: INFO: Pod "downward-api-0389520c-df2b-4d36-a80f-c2fa78704905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052950265s
May 8 14:00:06.313: INFO: Pod "downward-api-0389520c-df2b-4d36-a80f-c2fa78704905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056586318s
STEP: Saw pod success
May 8 14:00:06.313: INFO: Pod "downward-api-0389520c-df2b-4d36-a80f-c2fa78704905" satisfied condition "success or failure"
May 8 14:00:06.316: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0389520c-df2b-4d36-a80f-c2fa78704905 container dapi-container:
STEP: delete the pod
May 8 14:00:06.387: INFO: Waiting for pod downward-api-0389520c-df2b-4d36-a80f-c2fa78704905 to disappear
May 8 14:00:06.444: INFO: Pod downward-api-0389520c-df2b-4d36-a80f-c2fa78704905 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:00:06.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3911" for this suite.
May 8 14:00:12.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:00:12.546: INFO: namespace downward-api-3911 deletion completed in 6.097482929s
• [SLOW TEST:10.651 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:00:12.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 8 14:00:12.639: INFO: Waiting up to 5m0s for pod "pod-4135afca-68c2-4441-b2b2-799d7bb77889" in namespace "emptydir-1902" to be "success or failure"
May 8 14:00:12.642: INFO: Pod "pod-4135afca-68c2-4441-b2b2-799d7bb77889": Phase="Pending", Reason="", readiness=false. Elapsed: 3.39076ms
May 8 14:00:14.647: INFO: Pod "pod-4135afca-68c2-4441-b2b2-799d7bb77889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008252356s
May 8 14:00:16.651: INFO: Pod "pod-4135afca-68c2-4441-b2b2-799d7bb77889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01282926s
STEP: Saw pod success
May 8 14:00:16.651: INFO: Pod "pod-4135afca-68c2-4441-b2b2-799d7bb77889" satisfied condition "success or failure"
May 8 14:00:16.655: INFO: Trying to get logs from node iruya-worker2 pod pod-4135afca-68c2-4441-b2b2-799d7bb77889 container test-container:
STEP: delete the pod
May 8 14:00:16.694: INFO: Waiting for pod pod-4135afca-68c2-4441-b2b2-799d7bb77889 to disappear
May 8 14:00:16.722: INFO: Pod pod-4135afca-68c2-4441-b2b2-799d7bb77889 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:00:16.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1902" for this suite.
May 8 14:00:22.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:00:22.824: INFO: namespace emptydir-1902 deletion completed in 6.097679827s
• [SLOW TEST:10.278 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:00:22.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 8 14:00:22.946: INFO: Waiting up to 5m0s for pod "pod-fbfac994-67db-4931-b91b-dd306fd90423" in namespace "emptydir-7787" to be "success or failure"
May 8 14:00:22.957: INFO: Pod "pod-fbfac994-67db-4931-b91b-dd306fd90423": Phase="Pending", Reason="", readiness=false. Elapsed: 10.358469ms
May 8 14:00:24.960: INFO: Pod "pod-fbfac994-67db-4931-b91b-dd306fd90423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013755456s
May 8 14:00:26.964: INFO: Pod "pod-fbfac994-67db-4931-b91b-dd306fd90423": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017873822s
STEP: Saw pod success
May 8 14:00:26.964: INFO: Pod "pod-fbfac994-67db-4931-b91b-dd306fd90423" satisfied condition "success or failure"
May 8 14:00:26.968: INFO: Trying to get logs from node iruya-worker pod pod-fbfac994-67db-4931-b91b-dd306fd90423 container test-container:
STEP: delete the pod
May 8 14:00:27.002: INFO: Waiting for pod pod-fbfac994-67db-4931-b91b-dd306fd90423 to disappear
May 8 14:00:27.010: INFO: Pod pod-fbfac994-67db-4931-b91b-dd306fd90423 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:00:27.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7787" for this suite.
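
Each of the EmptyDir permission specs in this run follows the same pattern: submit a pod whose single one-shot container creates a file on the emptydir mount with the requested mode, wait up to 5m0s for the pod to satisfy the "success or failure" condition (phase Succeeded or Failed), then fetch the container log from the node to check the reported ownership and permissions. The Go sketch below shows roughly the pod these specs submit; the helper name, the mounttest image tag, and its flags are illustrative assumptions, not taken from the framework's source.

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirTestPod sketches the pod shape behind steps like "Creating a
    // pod to test emptydir 0644 on tmpfs". StorageMediumMemory selects
    // tmpfs; StorageMediumDefault selects the node-default medium.
    func emptyDirTestPod(medium v1.StorageMedium) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
            Spec: v1.PodSpec{
                // RestartPolicyNever lets the one-shot container drive the
                // pod to phase Succeeded, which is what the 5m0s wait polls.
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    VolumeSource: v1.VolumeSource{
                        EmptyDir: &v1.EmptyDirVolumeSource{Medium: medium},
                    },
                }},
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image tag
                    Args: []string{ // assumed flags, shown for illustration only
                        "--new_file_0644=/test-volume/test-file",
                        "--file_perm=/test-volume/test-file",
                    },
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }

    func main() {
        _ = emptyDirTestPod(v1.StorageMediumMemory) // the "(tmpfs)" variants
    }

The "(default)" variants would pass v1.StorageMediumDefault instead, and the non-root variants run the test binary under a non-root UID (how the framework arranges that is an implementation detail); in every case the repeated Phase="Pending" polls in the log end once the container exits and the pod reports Succeeded.
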
May 8 14:00:33.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:00:33.124: INFO: namespace emptydir-7787 deletion completed in 6.110190068s
• [SLOW TEST:10.299 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:00:33.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-52a3f795-d3c2-4007-9d5e-cf5123fa2f50
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:00:39.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-962" for this suite.
May 8 14:01:01.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:01:01.401: INFO: namespace configmap-962 deletion completed in 22.138442229s
• [SLOW TEST:28.277 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:01:01.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 8 14:01:11.587: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true
CaptureStderr:true PreserveWhitespace:false} May 8 14:01:11.587: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:11.639507 6 log.go:172] (0xc0013228f0) (0xc0030b88c0) Create stream I0508 14:01:11.639544 6 log.go:172] (0xc0013228f0) (0xc0030b88c0) Stream added, broadcasting: 1 I0508 14:01:11.641732 6 log.go:172] (0xc0013228f0) Reply frame received for 1 I0508 14:01:11.641777 6 log.go:172] (0xc0013228f0) (0xc0018e7ea0) Create stream I0508 14:01:11.641792 6 log.go:172] (0xc0013228f0) (0xc0018e7ea0) Stream added, broadcasting: 3 I0508 14:01:11.642731 6 log.go:172] (0xc0013228f0) Reply frame received for 3 I0508 14:01:11.642783 6 log.go:172] (0xc0013228f0) (0xc001c220a0) Create stream I0508 14:01:11.642798 6 log.go:172] (0xc0013228f0) (0xc001c220a0) Stream added, broadcasting: 5 I0508 14:01:11.643769 6 log.go:172] (0xc0013228f0) Reply frame received for 5 I0508 14:01:11.729462 6 log.go:172] (0xc0013228f0) Data frame received for 3 I0508 14:01:11.729491 6 log.go:172] (0xc0018e7ea0) (3) Data frame handling I0508 14:01:11.729501 6 log.go:172] (0xc0018e7ea0) (3) Data frame sent I0508 14:01:11.729507 6 log.go:172] (0xc0013228f0) Data frame received for 3 I0508 14:01:11.729511 6 log.go:172] (0xc0018e7ea0) (3) Data frame handling I0508 14:01:11.729562 6 log.go:172] (0xc0013228f0) Data frame received for 5 I0508 14:01:11.729572 6 log.go:172] (0xc001c220a0) (5) Data frame handling I0508 14:01:11.730990 6 log.go:172] (0xc0013228f0) Data frame received for 1 I0508 14:01:11.731036 6 log.go:172] (0xc0030b88c0) (1) Data frame handling I0508 14:01:11.731051 6 log.go:172] (0xc0030b88c0) (1) Data frame sent I0508 14:01:11.731067 6 log.go:172] (0xc0013228f0) (0xc0030b88c0) Stream removed, broadcasting: 1 I0508 14:01:11.731208 6 log.go:172] (0xc0013228f0) (0xc0030b88c0) Stream removed, broadcasting: 1 I0508 14:01:11.731230 6 log.go:172] (0xc0013228f0) (0xc0018e7ea0) Stream removed, broadcasting: 3 I0508 14:01:11.731444 6 log.go:172] (0xc0013228f0) (0xc001c220a0) Stream removed, broadcasting: 5 May 8 14:01:11.731: INFO: Exec stderr: "" May 8 14:01:11.731: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:11.731: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:11.734222 6 log.go:172] (0xc0013228f0) Go away received I0508 14:01:11.764171 6 log.go:172] (0xc001323290) (0xc0030b8a00) Create stream I0508 14:01:11.764197 6 log.go:172] (0xc001323290) (0xc0030b8a00) Stream added, broadcasting: 1 I0508 14:01:11.766053 6 log.go:172] (0xc001323290) Reply frame received for 1 I0508 14:01:11.766105 6 log.go:172] (0xc001323290) (0xc002909b80) Create stream I0508 14:01:11.766114 6 log.go:172] (0xc001323290) (0xc002909b80) Stream added, broadcasting: 3 I0508 14:01:11.766816 6 log.go:172] (0xc001323290) Reply frame received for 3 I0508 14:01:11.766841 6 log.go:172] (0xc001323290) (0xc002978140) Create stream I0508 14:01:11.766856 6 log.go:172] (0xc001323290) (0xc002978140) Stream added, broadcasting: 5 I0508 14:01:11.767556 6 log.go:172] (0xc001323290) Reply frame received for 5 I0508 14:01:11.822713 6 log.go:172] (0xc001323290) Data frame received for 5 I0508 14:01:11.822749 6 log.go:172] (0xc002978140) (5) Data frame handling I0508 14:01:11.822782 6 log.go:172] (0xc001323290) Data frame received for 3 I0508 14:01:11.822797 6 log.go:172] (0xc002909b80) (3) Data frame handling I0508 14:01:11.822812 6 log.go:172] (0xc002909b80) (3) Data frame sent I0508 
14:01:11.822829 6 log.go:172] (0xc001323290) Data frame received for 3 I0508 14:01:11.822834 6 log.go:172] (0xc002909b80) (3) Data frame handling I0508 14:01:11.824268 6 log.go:172] (0xc001323290) Data frame received for 1 I0508 14:01:11.824287 6 log.go:172] (0xc0030b8a00) (1) Data frame handling I0508 14:01:11.824300 6 log.go:172] (0xc0030b8a00) (1) Data frame sent I0508 14:01:11.824314 6 log.go:172] (0xc001323290) (0xc0030b8a00) Stream removed, broadcasting: 1 I0508 14:01:11.824370 6 log.go:172] (0xc001323290) Go away received I0508 14:01:11.824453 6 log.go:172] (0xc001323290) (0xc0030b8a00) Stream removed, broadcasting: 1 I0508 14:01:11.824513 6 log.go:172] (0xc001323290) (0xc002909b80) Stream removed, broadcasting: 3 I0508 14:01:11.824541 6 log.go:172] (0xc001323290) (0xc002978140) Stream removed, broadcasting: 5 May 8 14:01:11.824: INFO: Exec stderr: "" May 8 14:01:11.824: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:11.824: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:11.849304 6 log.go:172] (0xc0025e4420) (0xc001c22500) Create stream I0508 14:01:11.849326 6 log.go:172] (0xc0025e4420) (0xc001c22500) Stream added, broadcasting: 1 I0508 14:01:11.851341 6 log.go:172] (0xc0025e4420) Reply frame received for 1 I0508 14:01:11.851381 6 log.go:172] (0xc0025e4420) (0xc002f46320) Create stream I0508 14:01:11.851394 6 log.go:172] (0xc0025e4420) (0xc002f46320) Stream added, broadcasting: 3 I0508 14:01:11.852427 6 log.go:172] (0xc0025e4420) Reply frame received for 3 I0508 14:01:11.852462 6 log.go:172] (0xc0025e4420) (0xc002f463c0) Create stream I0508 14:01:11.852475 6 log.go:172] (0xc0025e4420) (0xc002f463c0) Stream added, broadcasting: 5 I0508 14:01:11.853661 6 log.go:172] (0xc0025e4420) Reply frame received for 5 I0508 14:01:11.919003 6 log.go:172] (0xc0025e4420) Data frame received for 3 I0508 14:01:11.919074 6 log.go:172] (0xc002f46320) (3) Data frame handling I0508 14:01:11.919092 6 log.go:172] (0xc002f46320) (3) Data frame sent I0508 14:01:11.919107 6 log.go:172] (0xc0025e4420) Data frame received for 3 I0508 14:01:11.919117 6 log.go:172] (0xc002f46320) (3) Data frame handling I0508 14:01:11.919139 6 log.go:172] (0xc0025e4420) Data frame received for 5 I0508 14:01:11.919154 6 log.go:172] (0xc002f463c0) (5) Data frame handling I0508 14:01:11.920417 6 log.go:172] (0xc0025e4420) Data frame received for 1 I0508 14:01:11.920447 6 log.go:172] (0xc001c22500) (1) Data frame handling I0508 14:01:11.920472 6 log.go:172] (0xc001c22500) (1) Data frame sent I0508 14:01:11.920499 6 log.go:172] (0xc0025e4420) (0xc001c22500) Stream removed, broadcasting: 1 I0508 14:01:11.920521 6 log.go:172] (0xc0025e4420) Go away received I0508 14:01:11.920647 6 log.go:172] (0xc0025e4420) (0xc001c22500) Stream removed, broadcasting: 1 I0508 14:01:11.920677 6 log.go:172] (0xc0025e4420) (0xc002f46320) Stream removed, broadcasting: 3 I0508 14:01:11.920702 6 log.go:172] (0xc0025e4420) (0xc002f463c0) Stream removed, broadcasting: 5 May 8 14:01:11.920: INFO: Exec stderr: "" May 8 14:01:11.920: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:11.920: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:11.952139 6 log.go:172] (0xc0025e4fd0) (0xc001c228c0) Create stream I0508 14:01:11.952174 6 
log.go:172] (0xc0025e4fd0) (0xc001c228c0) Stream added, broadcasting: 1 I0508 14:01:11.955089 6 log.go:172] (0xc0025e4fd0) Reply frame received for 1 I0508 14:01:11.955146 6 log.go:172] (0xc0025e4fd0) (0xc0030b8aa0) Create stream I0508 14:01:11.955170 6 log.go:172] (0xc0025e4fd0) (0xc0030b8aa0) Stream added, broadcasting: 3 I0508 14:01:11.957293 6 log.go:172] (0xc0025e4fd0) Reply frame received for 3 I0508 14:01:11.957372 6 log.go:172] (0xc0025e4fd0) (0xc0029781e0) Create stream I0508 14:01:11.957407 6 log.go:172] (0xc0025e4fd0) (0xc0029781e0) Stream added, broadcasting: 5 I0508 14:01:11.960106 6 log.go:172] (0xc0025e4fd0) Reply frame received for 5 I0508 14:01:12.013619 6 log.go:172] (0xc0025e4fd0) Data frame received for 5 I0508 14:01:12.013656 6 log.go:172] (0xc0029781e0) (5) Data frame handling I0508 14:01:12.013678 6 log.go:172] (0xc0025e4fd0) Data frame received for 3 I0508 14:01:12.013689 6 log.go:172] (0xc0030b8aa0) (3) Data frame handling I0508 14:01:12.013702 6 log.go:172] (0xc0030b8aa0) (3) Data frame sent I0508 14:01:12.013712 6 log.go:172] (0xc0025e4fd0) Data frame received for 3 I0508 14:01:12.013721 6 log.go:172] (0xc0030b8aa0) (3) Data frame handling I0508 14:01:12.014888 6 log.go:172] (0xc0025e4fd0) Data frame received for 1 I0508 14:01:12.014925 6 log.go:172] (0xc001c228c0) (1) Data frame handling I0508 14:01:12.014937 6 log.go:172] (0xc001c228c0) (1) Data frame sent I0508 14:01:12.014961 6 log.go:172] (0xc0025e4fd0) (0xc001c228c0) Stream removed, broadcasting: 1 I0508 14:01:12.014988 6 log.go:172] (0xc0025e4fd0) Go away received I0508 14:01:12.015073 6 log.go:172] (0xc0025e4fd0) (0xc001c228c0) Stream removed, broadcasting: 1 I0508 14:01:12.015100 6 log.go:172] (0xc0025e4fd0) (0xc0030b8aa0) Stream removed, broadcasting: 3 I0508 14:01:12.015114 6 log.go:172] (0xc0025e4fd0) (0xc0029781e0) Stream removed, broadcasting: 5 May 8 14:01:12.015: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 8 14:01:12.015: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.015: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.048169 6 log.go:172] (0xc0025e58c0) (0xc001c22c80) Create stream I0508 14:01:12.048207 6 log.go:172] (0xc0025e58c0) (0xc001c22c80) Stream added, broadcasting: 1 I0508 14:01:12.050722 6 log.go:172] (0xc0025e58c0) Reply frame received for 1 I0508 14:01:12.050780 6 log.go:172] (0xc0025e58c0) (0xc001c22d20) Create stream I0508 14:01:12.050796 6 log.go:172] (0xc0025e58c0) (0xc001c22d20) Stream added, broadcasting: 3 I0508 14:01:12.051737 6 log.go:172] (0xc0025e58c0) Reply frame received for 3 I0508 14:01:12.051768 6 log.go:172] (0xc0025e58c0) (0xc001c22dc0) Create stream I0508 14:01:12.051780 6 log.go:172] (0xc0025e58c0) (0xc001c22dc0) Stream added, broadcasting: 5 I0508 14:01:12.052806 6 log.go:172] (0xc0025e58c0) Reply frame received for 5 I0508 14:01:12.117104 6 log.go:172] (0xc0025e58c0) Data frame received for 5 I0508 14:01:12.117326 6 log.go:172] (0xc001c22dc0) (5) Data frame handling I0508 14:01:12.117357 6 log.go:172] (0xc0025e58c0) Data frame received for 3 I0508 14:01:12.117371 6 log.go:172] (0xc001c22d20) (3) Data frame handling I0508 14:01:12.117390 6 log.go:172] (0xc001c22d20) (3) Data frame sent I0508 14:01:12.117403 6 log.go:172] (0xc0025e58c0) Data frame received for 3 I0508 14:01:12.117408 6 log.go:172] 
(0xc001c22d20) (3) Data frame handling I0508 14:01:12.118680 6 log.go:172] (0xc0025e58c0) Data frame received for 1 I0508 14:01:12.118722 6 log.go:172] (0xc001c22c80) (1) Data frame handling I0508 14:01:12.118746 6 log.go:172] (0xc001c22c80) (1) Data frame sent I0508 14:01:12.118762 6 log.go:172] (0xc0025e58c0) (0xc001c22c80) Stream removed, broadcasting: 1 I0508 14:01:12.118782 6 log.go:172] (0xc0025e58c0) Go away received I0508 14:01:12.118896 6 log.go:172] (0xc0025e58c0) (0xc001c22c80) Stream removed, broadcasting: 1 I0508 14:01:12.118917 6 log.go:172] (0xc0025e58c0) (0xc001c22d20) Stream removed, broadcasting: 3 I0508 14:01:12.118923 6 log.go:172] (0xc0025e58c0) (0xc001c22dc0) Stream removed, broadcasting: 5 May 8 14:01:12.118: INFO: Exec stderr: "" May 8 14:01:12.118: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.118: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.147634 6 log.go:172] (0xc001c3bef0) (0xc00246a000) Create stream I0508 14:01:12.147676 6 log.go:172] (0xc001c3bef0) (0xc00246a000) Stream added, broadcasting: 1 I0508 14:01:12.150370 6 log.go:172] (0xc001c3bef0) Reply frame received for 1 I0508 14:01:12.150420 6 log.go:172] (0xc001c3bef0) (0xc002f46460) Create stream I0508 14:01:12.150438 6 log.go:172] (0xc001c3bef0) (0xc002f46460) Stream added, broadcasting: 3 I0508 14:01:12.151390 6 log.go:172] (0xc001c3bef0) Reply frame received for 3 I0508 14:01:12.151413 6 log.go:172] (0xc001c3bef0) (0xc002f465a0) Create stream I0508 14:01:12.151419 6 log.go:172] (0xc001c3bef0) (0xc002f465a0) Stream added, broadcasting: 5 I0508 14:01:12.152495 6 log.go:172] (0xc001c3bef0) Reply frame received for 5 I0508 14:01:12.210379 6 log.go:172] (0xc001c3bef0) Data frame received for 5 I0508 14:01:12.210415 6 log.go:172] (0xc002f465a0) (5) Data frame handling I0508 14:01:12.210431 6 log.go:172] (0xc001c3bef0) Data frame received for 3 I0508 14:01:12.210435 6 log.go:172] (0xc002f46460) (3) Data frame handling I0508 14:01:12.210461 6 log.go:172] (0xc002f46460) (3) Data frame sent I0508 14:01:12.210476 6 log.go:172] (0xc001c3bef0) Data frame received for 3 I0508 14:01:12.210481 6 log.go:172] (0xc002f46460) (3) Data frame handling I0508 14:01:12.211568 6 log.go:172] (0xc001c3bef0) Data frame received for 1 I0508 14:01:12.211597 6 log.go:172] (0xc00246a000) (1) Data frame handling I0508 14:01:12.211676 6 log.go:172] (0xc00246a000) (1) Data frame sent I0508 14:01:12.211702 6 log.go:172] (0xc001c3bef0) (0xc00246a000) Stream removed, broadcasting: 1 I0508 14:01:12.211720 6 log.go:172] (0xc001c3bef0) Go away received I0508 14:01:12.211880 6 log.go:172] (0xc001c3bef0) (0xc00246a000) Stream removed, broadcasting: 1 I0508 14:01:12.211910 6 log.go:172] (0xc001c3bef0) (0xc002f46460) Stream removed, broadcasting: 3 I0508 14:01:12.211927 6 log.go:172] (0xc001c3bef0) (0xc002f465a0) Stream removed, broadcasting: 5 May 8 14:01:12.211: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 8 14:01:12.212: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.212: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.241278 6 log.go:172] (0xc0031dedc0) (0xc002978500) Create stream I0508 
14:01:12.241305 6 log.go:172] (0xc0031dedc0) (0xc002978500) Stream added, broadcasting: 1 I0508 14:01:12.249714 6 log.go:172] (0xc0031dedc0) Reply frame received for 1 I0508 14:01:12.249757 6 log.go:172] (0xc0031dedc0) (0xc00058a000) Create stream I0508 14:01:12.249769 6 log.go:172] (0xc0031dedc0) (0xc00058a000) Stream added, broadcasting: 3 I0508 14:01:12.250476 6 log.go:172] (0xc0031dedc0) Reply frame received for 3 I0508 14:01:12.250502 6 log.go:172] (0xc0031dedc0) (0xc0003b0000) Create stream I0508 14:01:12.250511 6 log.go:172] (0xc0031dedc0) (0xc0003b0000) Stream added, broadcasting: 5 I0508 14:01:12.251258 6 log.go:172] (0xc0031dedc0) Reply frame received for 5 I0508 14:01:12.300149 6 log.go:172] (0xc0031dedc0) Data frame received for 5 I0508 14:01:12.300175 6 log.go:172] (0xc0003b0000) (5) Data frame handling I0508 14:01:12.300249 6 log.go:172] (0xc0031dedc0) Data frame received for 3 I0508 14:01:12.300279 6 log.go:172] (0xc00058a000) (3) Data frame handling I0508 14:01:12.300302 6 log.go:172] (0xc00058a000) (3) Data frame sent I0508 14:01:12.300314 6 log.go:172] (0xc0031dedc0) Data frame received for 3 I0508 14:01:12.300326 6 log.go:172] (0xc00058a000) (3) Data frame handling I0508 14:01:12.302390 6 log.go:172] (0xc0031dedc0) Data frame received for 1 I0508 14:01:12.302417 6 log.go:172] (0xc002978500) (1) Data frame handling I0508 14:01:12.302434 6 log.go:172] (0xc002978500) (1) Data frame sent I0508 14:01:12.302598 6 log.go:172] (0xc0031dedc0) (0xc002978500) Stream removed, broadcasting: 1 I0508 14:01:12.302662 6 log.go:172] (0xc0031dedc0) Go away received I0508 14:01:12.302951 6 log.go:172] (0xc0031dedc0) (0xc002978500) Stream removed, broadcasting: 1 I0508 14:01:12.302982 6 log.go:172] (0xc0031dedc0) (0xc00058a000) Stream removed, broadcasting: 3 I0508 14:01:12.303002 6 log.go:172] (0xc0031dedc0) (0xc0003b0000) Stream removed, broadcasting: 5 May 8 14:01:12.303: INFO: Exec stderr: "" May 8 14:01:12.303: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.303: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.336564 6 log.go:172] (0xc0007e6f20) (0xc0003b0dc0) Create stream I0508 14:01:12.336596 6 log.go:172] (0xc0007e6f20) (0xc0003b0dc0) Stream added, broadcasting: 1 I0508 14:01:12.338871 6 log.go:172] (0xc0007e6f20) Reply frame received for 1 I0508 14:01:12.338936 6 log.go:172] (0xc0007e6f20) (0xc002908000) Create stream I0508 14:01:12.338963 6 log.go:172] (0xc0007e6f20) (0xc002908000) Stream added, broadcasting: 3 I0508 14:01:12.340002 6 log.go:172] (0xc0007e6f20) Reply frame received for 3 I0508 14:01:12.340032 6 log.go:172] (0xc0007e6f20) (0xc002ce6000) Create stream I0508 14:01:12.340048 6 log.go:172] (0xc0007e6f20) (0xc002ce6000) Stream added, broadcasting: 5 I0508 14:01:12.340929 6 log.go:172] (0xc0007e6f20) Reply frame received for 5 I0508 14:01:12.409549 6 log.go:172] (0xc0007e6f20) Data frame received for 5 I0508 14:01:12.409592 6 log.go:172] (0xc002ce6000) (5) Data frame handling I0508 14:01:12.409654 6 log.go:172] (0xc0007e6f20) Data frame received for 3 I0508 14:01:12.409680 6 log.go:172] (0xc002908000) (3) Data frame handling I0508 14:01:12.409709 6 log.go:172] (0xc002908000) (3) Data frame sent I0508 14:01:12.409786 6 log.go:172] (0xc0007e6f20) Data frame received for 3 I0508 14:01:12.409803 6 log.go:172] (0xc002908000) (3) Data frame handling I0508 14:01:12.411157 6 log.go:172] 
(0xc0007e6f20) Data frame received for 1 I0508 14:01:12.411205 6 log.go:172] (0xc0003b0dc0) (1) Data frame handling I0508 14:01:12.411254 6 log.go:172] (0xc0003b0dc0) (1) Data frame sent I0508 14:01:12.411280 6 log.go:172] (0xc0007e6f20) (0xc0003b0dc0) Stream removed, broadcasting: 1 I0508 14:01:12.411296 6 log.go:172] (0xc0007e6f20) Go away received I0508 14:01:12.411456 6 log.go:172] (0xc0007e6f20) (0xc0003b0dc0) Stream removed, broadcasting: 1 I0508 14:01:12.411485 6 log.go:172] (0xc0007e6f20) (0xc002908000) Stream removed, broadcasting: 3 I0508 14:01:12.411498 6 log.go:172] (0xc0007e6f20) (0xc002ce6000) Stream removed, broadcasting: 5 May 8 14:01:12.411: INFO: Exec stderr: "" May 8 14:01:12.411: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.411: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.437423 6 log.go:172] (0xc0006ab1e0) (0xc002ce6460) Create stream I0508 14:01:12.437453 6 log.go:172] (0xc0006ab1e0) (0xc002ce6460) Stream added, broadcasting: 1 I0508 14:01:12.438918 6 log.go:172] (0xc0006ab1e0) Reply frame received for 1 I0508 14:01:12.438949 6 log.go:172] (0xc0006ab1e0) (0xc002908140) Create stream I0508 14:01:12.438959 6 log.go:172] (0xc0006ab1e0) (0xc002908140) Stream added, broadcasting: 3 I0508 14:01:12.439604 6 log.go:172] (0xc0006ab1e0) Reply frame received for 3 I0508 14:01:12.439631 6 log.go:172] (0xc0006ab1e0) (0xc0029081e0) Create stream I0508 14:01:12.439641 6 log.go:172] (0xc0006ab1e0) (0xc0029081e0) Stream added, broadcasting: 5 I0508 14:01:12.440412 6 log.go:172] (0xc0006ab1e0) Reply frame received for 5 I0508 14:01:12.503420 6 log.go:172] (0xc0006ab1e0) Data frame received for 5 I0508 14:01:12.503449 6 log.go:172] (0xc0029081e0) (5) Data frame handling I0508 14:01:12.503519 6 log.go:172] (0xc0006ab1e0) Data frame received for 3 I0508 14:01:12.503570 6 log.go:172] (0xc002908140) (3) Data frame handling I0508 14:01:12.503604 6 log.go:172] (0xc002908140) (3) Data frame sent I0508 14:01:12.503626 6 log.go:172] (0xc0006ab1e0) Data frame received for 3 I0508 14:01:12.503645 6 log.go:172] (0xc002908140) (3) Data frame handling I0508 14:01:12.505548 6 log.go:172] (0xc0006ab1e0) Data frame received for 1 I0508 14:01:12.505570 6 log.go:172] (0xc002ce6460) (1) Data frame handling I0508 14:01:12.505595 6 log.go:172] (0xc002ce6460) (1) Data frame sent I0508 14:01:12.505701 6 log.go:172] (0xc0006ab1e0) (0xc002ce6460) Stream removed, broadcasting: 1 I0508 14:01:12.505804 6 log.go:172] (0xc0006ab1e0) (0xc002ce6460) Stream removed, broadcasting: 1 I0508 14:01:12.505828 6 log.go:172] (0xc0006ab1e0) (0xc002908140) Stream removed, broadcasting: 3 I0508 14:01:12.505908 6 log.go:172] (0xc0006ab1e0) Go away received I0508 14:01:12.506063 6 log.go:172] (0xc0006ab1e0) (0xc0029081e0) Stream removed, broadcasting: 5 May 8 14:01:12.506: INFO: Exec stderr: "" May 8 14:01:12.506: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6479 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:01:12.506: INFO: >>> kubeConfig: /root/.kube/config I0508 14:01:12.538255 6 log.go:172] (0xc0006abef0) (0xc002ce6780) Create stream I0508 14:01:12.538291 6 log.go:172] (0xc0006abef0) (0xc002ce6780) Stream added, broadcasting: 1 I0508 14:01:12.540355 6 log.go:172] (0xc0006abef0) Reply frame received for 1 I0508 
14:01:12.540402 6 log.go:172] (0xc0006abef0) (0xc002908280) Create stream I0508 14:01:12.540427 6 log.go:172] (0xc0006abef0) (0xc002908280) Stream added, broadcasting: 3 I0508 14:01:12.541786 6 log.go:172] (0xc0006abef0) Reply frame received for 3 I0508 14:01:12.541825 6 log.go:172] (0xc0006abef0) (0xc00058a140) Create stream I0508 14:01:12.541850 6 log.go:172] (0xc0006abef0) (0xc00058a140) Stream added, broadcasting: 5 I0508 14:01:12.543082 6 log.go:172] (0xc0006abef0) Reply frame received for 5 I0508 14:01:12.617826 6 log.go:172] (0xc0006abef0) Data frame received for 5 I0508 14:01:12.617852 6 log.go:172] (0xc00058a140) (5) Data frame handling I0508 14:01:12.617907 6 log.go:172] (0xc0006abef0) Data frame received for 3 I0508 14:01:12.617973 6 log.go:172] (0xc002908280) (3) Data frame handling I0508 14:01:12.618015 6 log.go:172] (0xc002908280) (3) Data frame sent I0508 14:01:12.618038 6 log.go:172] (0xc0006abef0) Data frame received for 3 I0508 14:01:12.618054 6 log.go:172] (0xc002908280) (3) Data frame handling I0508 14:01:12.619383 6 log.go:172] (0xc0006abef0) Data frame received for 1 I0508 14:01:12.619410 6 log.go:172] (0xc002ce6780) (1) Data frame handling I0508 14:01:12.619429 6 log.go:172] (0xc002ce6780) (1) Data frame sent I0508 14:01:12.619444 6 log.go:172] (0xc0006abef0) (0xc002ce6780) Stream removed, broadcasting: 1 I0508 14:01:12.619460 6 log.go:172] (0xc0006abef0) Go away received I0508 14:01:12.619594 6 log.go:172] (0xc0006abef0) (0xc002ce6780) Stream removed, broadcasting: 1 I0508 14:01:12.619614 6 log.go:172] (0xc0006abef0) (0xc002908280) Stream removed, broadcasting: 3 I0508 14:01:12.619623 6 log.go:172] (0xc0006abef0) (0xc00058a140) Stream removed, broadcasting: 5 May 8 14:01:12.619: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:01:12.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6479" for this suite. 
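
The Create stream / Reply frame / Data frame lines above are client-go's SPDY multiplexer logging at high verbosity: each ExecWithOptions call upgrades a single HTTP connection, and the error/status, stdout, and stderr channels ride on SPDY streams 1, 3, and 5 respectively, which is why every cat /etc/hosts produces the same three-stream choreography (the "Go away received" entries are the server closing the connection). A minimal sketch of one such exec against this cluster, assuming a client-go release contemporary with the v1.15 suite (later releases added context arguments and StreamWithContext); the namespace, pod, and container names are copied from the log:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Load the same kubeconfig the suite uses (path taken from the log).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec subresource request; names come from the log above.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-6479").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-2",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// NewSPDYExecutor opens the multiplexed connection whose stream 1/3/5
	// frames (error/stdout/stderr) appear in the log above.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\n", stdout.String(), stderr.String())
}

An empty stderr buffer here corresponds to the repeated Exec stderr: "" lines in the log.
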
May 8 14:02:02.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:02:02.706: INFO: namespace e2e-kubelet-etc-hosts-6479 deletion completed in 50.082307676s • [SLOW TEST:61.305 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:02:02.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 8 14:02:02.792: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:02:02.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4276" for this suite. 
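
kubectl proxy -p 0 binds an ephemeral port instead of the default 8001 and prints the chosen address on its first line of output; the test then curls /api/ through it (--disable-filter turns off the proxy's request filter, which kubectl warns is unsafe on untrusted networks). A rough Go equivalent, assuming kubectl is on PATH; the banner-parsing regexp is illustrative, not the e2e framework's code:

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// -p 0 lets the kernel pick a free port; kubectl prints the address.
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// First line looks like: "Starting to serve on 127.0.0.1:46423".
	line, err := bufio.NewReader(out).ReadString('\n')
	if err != nil {
		panic(err)
	}
	// Panics if kubectl's banner format ever changes; fine for a sketch.
	port := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)[1]

	// Same check the test performs: GET /api/ through the proxy.
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", port))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
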
May 8 14:02:08.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:02:08.970: INFO: namespace kubectl-4276 deletion completed in 6.089648611s • [SLOW TEST:6.263 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:02:08.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:02:09.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a" in namespace "projected-6776" to be "success or failure" May 8 14:02:09.086: INFO: Pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.583604ms May 8 14:02:11.090: INFO: Pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03310257s May 8 14:02:13.095: INFO: Pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a": Phase="Running", Reason="", readiness=true. Elapsed: 4.037674755s May 8 14:02:15.107: INFO: Pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049580688s STEP: Saw pod success May 8 14:02:15.107: INFO: Pod "downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a" satisfied condition "success or failure" May 8 14:02:15.110: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a container client-container: STEP: delete the pod May 8 14:02:15.134: INFO: Waiting for pod downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a to disappear May 8 14:02:15.153: INFO: Pod downwardapi-volume-cdf20233-e8b1-4a25-ac56-302b3100fd4a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:02:15.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6776" for this suite. 
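
The pod in this test mounts a projected volume whose downwardAPI source renders metadata.name into a file, and the client-container simply prints that file back. A sketch of an equivalent manifest, built with the corev1 types and printed as YAML; the busybox image and the /etc/podinfo path are illustrative stand-ins for the suite's own mounttest image and paths:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							// Renders metadata.name into the file "podname".
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod) // a manifest you could `kubectl apply -f -`
	fmt.Print(string(out))
}
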
May 8 14:02:21.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:02:21.250: INFO: namespace projected-6776 deletion completed in 6.094255045s • [SLOW TEST:12.279 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:02:21.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 8 14:02:21.384: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:21.411: INFO: Number of nodes with available pods: 0 May 8 14:02:21.411: INFO: Node iruya-worker is running more than one daemon pod May 8 14:02:22.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:22.420: INFO: Number of nodes with available pods: 0 May 8 14:02:22.420: INFO: Node iruya-worker is running more than one daemon pod May 8 14:02:23.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:23.420: INFO: Number of nodes with available pods: 0 May 8 14:02:23.420: INFO: Node iruya-worker is running more than one daemon pod May 8 14:02:24.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:24.435: INFO: Number of nodes with available pods: 0 May 8 14:02:24.435: INFO: Node iruya-worker is running more than one daemon pod May 8 14:02:25.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:25.419: INFO: Number of nodes with available pods: 1 May 8 14:02:25.419: INFO: Node iruya-worker is running more than one daemon pod May 8 14:02:26.415: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:26.418: INFO: Number of nodes with available pods: 2 May 8 14:02:26.418: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 8 14:02:26.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 14:02:26.441: INFO: Number of nodes with available pods: 2 May 8 14:02:26.441: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4969, will wait for the garbage collector to delete the pods May 8 14:02:27.582: INFO: Deleting DaemonSet.extensions daemon-set took: 48.630216ms May 8 14:02:27.882: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247645ms May 8 14:02:32.187: INFO: Number of nodes with available pods: 0 May 8 14:02:32.187: INFO: Number of running nodes: 0, number of available pods: 0 May 8 14:02:32.189: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4969/daemonsets","resourceVersion":"9720322"},"items":null} May 8 14:02:32.192: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4969/pods","resourceVersion":"9720322"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:02:32.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4969" for this suite. 
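
The repeated "Number of nodes with available pods" lines come from a poll that compares the DaemonSet's status counters until every schedulable node runs an available pod (the tainted control-plane node is skipped, as the log notes); after the test forces one daemon pod to Failed, the same poll confirms the controller revives it. A hedged sketch of that readiness poll, assuming 1.15-era client-go signatures (no ctx argument):

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll until the DaemonSet reports one available pod per node the
	// scheduler may place it on, mirroring the log lines above.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := client.AppsV1().DaemonSets("daemonsets-4969").Get("daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("available %d / desired %d\n",
			ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
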
May 8 14:02:38.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:02:38.288: INFO: namespace daemonsets-4969 deletion completed in 6.084768584s • [SLOW TEST:17.038 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:02:38.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8423 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 14:02:38.346: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 8 14:03:06.533: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.178:8080/dial?request=hostName&protocol=http&host=10.244.1.69&port=8080&tries=1'] Namespace:pod-network-test-8423 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:03:06.533: INFO: >>> kubeConfig: /root/.kube/config I0508 14:03:06.563535 6 log.go:172] (0xc001b3a790) (0xc0030b9c20) Create stream I0508 14:03:06.563578 6 log.go:172] (0xc001b3a790) (0xc0030b9c20) Stream added, broadcasting: 1 I0508 14:03:06.565803 6 log.go:172] (0xc001b3a790) Reply frame received for 1 I0508 14:03:06.565839 6 log.go:172] (0xc001b3a790) (0xc0007fc5a0) Create stream I0508 14:03:06.565852 6 log.go:172] (0xc001b3a790) (0xc0007fc5a0) Stream added, broadcasting: 3 I0508 14:03:06.566902 6 log.go:172] (0xc001b3a790) Reply frame received for 3 I0508 14:03:06.566942 6 log.go:172] (0xc001b3a790) (0xc0030b9cc0) Create stream I0508 14:03:06.566974 6 log.go:172] (0xc001b3a790) (0xc0030b9cc0) Stream added, broadcasting: 5 I0508 14:03:06.567858 6 log.go:172] (0xc001b3a790) Reply frame received for 5 I0508 14:03:06.664280 6 log.go:172] (0xc001b3a790) Data frame received for 3 I0508 14:03:06.664309 6 log.go:172] (0xc0007fc5a0) (3) Data frame handling I0508 14:03:06.664330 6 log.go:172] (0xc0007fc5a0) (3) Data frame sent I0508 14:03:06.664845 6 log.go:172] (0xc001b3a790) Data frame received for 5 I0508 14:03:06.664874 6 log.go:172] (0xc0030b9cc0) (5) Data frame handling I0508 14:03:06.664896 6 log.go:172] (0xc001b3a790) Data frame received for 3 I0508 14:03:06.664906 6 log.go:172] (0xc0007fc5a0) (3) Data frame handling I0508 14:03:06.666633 6 log.go:172] (0xc001b3a790) Data frame received for 1 I0508 14:03:06.666669 6 log.go:172] (0xc0030b9c20) (1) Data frame handling I0508 14:03:06.666682 6 log.go:172] (0xc0030b9c20) (1) 
Data frame sent I0508 14:03:06.666696 6 log.go:172] (0xc001b3a790) (0xc0030b9c20) Stream removed, broadcasting: 1 I0508 14:03:06.666743 6 log.go:172] (0xc001b3a790) Go away received I0508 14:03:06.666841 6 log.go:172] (0xc001b3a790) (0xc0030b9c20) Stream removed, broadcasting: 1 I0508 14:03:06.666865 6 log.go:172] (0xc001b3a790) (0xc0007fc5a0) Stream removed, broadcasting: 3 I0508 14:03:06.666883 6 log.go:172] (0xc001b3a790) (0xc0030b9cc0) Stream removed, broadcasting: 5 May 8 14:03:06.666: INFO: Waiting for endpoints: map[] May 8 14:03:06.670: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.178:8080/dial?request=hostName&protocol=http&host=10.244.2.177&port=8080&tries=1'] Namespace:pod-network-test-8423 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 14:03:06.670: INFO: >>> kubeConfig: /root/.kube/config I0508 14:03:06.698287 6 log.go:172] (0xc0024e6d10) (0xc0007fd360) Create stream I0508 14:03:06.698328 6 log.go:172] (0xc0024e6d10) (0xc0007fd360) Stream added, broadcasting: 1 I0508 14:03:06.700017 6 log.go:172] (0xc0024e6d10) Reply frame received for 1 I0508 14:03:06.700058 6 log.go:172] (0xc0024e6d10) (0xc0007fd4a0) Create stream I0508 14:03:06.700077 6 log.go:172] (0xc0024e6d10) (0xc0007fd4a0) Stream added, broadcasting: 3 I0508 14:03:06.700826 6 log.go:172] (0xc0024e6d10) Reply frame received for 3 I0508 14:03:06.700868 6 log.go:172] (0xc0024e6d10) (0xc0007fd7c0) Create stream I0508 14:03:06.700881 6 log.go:172] (0xc0024e6d10) (0xc0007fd7c0) Stream added, broadcasting: 5 I0508 14:03:06.701745 6 log.go:172] (0xc0024e6d10) Reply frame received for 5 I0508 14:03:06.761840 6 log.go:172] (0xc0024e6d10) Data frame received for 3 I0508 14:03:06.761870 6 log.go:172] (0xc0007fd4a0) (3) Data frame handling I0508 14:03:06.761895 6 log.go:172] (0xc0007fd4a0) (3) Data frame sent I0508 14:03:06.762776 6 log.go:172] (0xc0024e6d10) Data frame received for 5 I0508 14:03:06.762809 6 log.go:172] (0xc0007fd7c0) (5) Data frame handling I0508 14:03:06.763121 6 log.go:172] (0xc0024e6d10) Data frame received for 3 I0508 14:03:06.763138 6 log.go:172] (0xc0007fd4a0) (3) Data frame handling I0508 14:03:06.763918 6 log.go:172] (0xc0024e6d10) Data frame received for 1 I0508 14:03:06.763933 6 log.go:172] (0xc0007fd360) (1) Data frame handling I0508 14:03:06.763952 6 log.go:172] (0xc0007fd360) (1) Data frame sent I0508 14:03:06.763970 6 log.go:172] (0xc0024e6d10) (0xc0007fd360) Stream removed, broadcasting: 1 I0508 14:03:06.764000 6 log.go:172] (0xc0024e6d10) Go away received I0508 14:03:06.764106 6 log.go:172] (0xc0024e6d10) (0xc0007fd360) Stream removed, broadcasting: 1 I0508 14:03:06.764125 6 log.go:172] (0xc0024e6d10) (0xc0007fd4a0) Stream removed, broadcasting: 3 I0508 14:03:06.764140 6 log.go:172] (0xc0024e6d10) (0xc0007fd7c0) Stream removed, broadcasting: 5 May 8 14:03:06.764: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:03:06.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8423" for this suite. 
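
The intra-pod check execs a curl from a host-network helper pod against the test container's /dial endpoint, which in turn fetches /hostName from the target pod IP and reports which hostnames answered; "Waiting for endpoints: map[]" means no expected endpoint is still outstanding. A minimal client for that endpoint; the pod IPs are copied from the log, the program must run inside the cluster to reach them, and the response struct is an assumption based on the dial helper's usual {"responses":[...]} output:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// dialResponse mirrors the JSON shape the dial helper returns,
// e.g. {"responses":["netserver-0"]} (assumed, not from this log).
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Query parameters copied from the log's curl invocation.
	u := url.URL{
		Scheme:   "http",
		Host:     "10.244.2.178:8080",
		Path:     "/dial",
		RawQuery: "request=hostName&protocol=http&host=10.244.1.69&port=8080&tries=1",
	}
	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	var dr dialResponse
	if err := json.Unmarshal(body, &dr); err != nil {
		panic(err)
	}
	fmt.Println("endpoints reached:", dr.Responses)
}
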
May 8 14:03:30.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:03:30.900: INFO: namespace pod-network-test-8423 deletion completed in 24.132006762s • [SLOW TEST:52.611 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:03:30.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-37e6c85b-3dee-4429-b65d-20366c6f1804 STEP: Creating a pod to test consume configMaps May 8 14:03:31.020: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32" in namespace "projected-2997" to be "success or failure" May 8 14:03:31.032: INFO: Pod "pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.49099ms May 8 14:03:33.037: INFO: Pod "pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016868947s May 8 14:03:35.040: INFO: Pod "pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020741572s STEP: Saw pod success May 8 14:03:35.040: INFO: Pod "pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32" satisfied condition "success or failure" May 8 14:03:35.043: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32 container projected-configmap-volume-test: STEP: delete the pod May 8 14:03:35.096: INFO: Waiting for pod pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32 to disappear May 8 14:03:35.359: INFO: Pod pod-projected-configmaps-95e2e5d1-2339-4813-96f9-593cd268df32 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:03:35.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2997" for this suite. 
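
Here the projected volume's source is a ConfigMap and the pod runs as a non-root UID, verifying the rendered file is still readable. An illustrative manifest in the same style as the earlier sketch; the ConfigMap name, UID 1000, and busybox image are assumptions, and the referenced ConfigMap (with a "data-1" key) must already exist in the namespace:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "projected-configmap-volume-test",
				Image:           "busybox", // illustrative
				Command:         []string{"cat", "/etc/projected-cm/data-1"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected-cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							// Assumes a ConfigMap named "demo-cm" exists.
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
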
May 8 14:03:41.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:03:41.540: INFO: namespace projected-2997 deletion completed in 6.135797271s • [SLOW TEST:10.639 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:03:41.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 8 14:03:41.590: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:03:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8400" for this suite. 
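
For a RestartAlways pod the kubelet runs each entry of spec.initContainers to completion, in order, before starting the regular containers; the test asserts both init containers terminate successfully and the pod then becomes Ready. A minimal shape of such a pod (the busybox and pause images are illustrative stand-ins for the suite's images):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Run to completion, in order, before the main container starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1", // illustrative long-running container
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
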
May 8 14:04:11.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:04:12.055: INFO: namespace init-container-8400 deletion completed in 22.08381073s • [SLOW TEST:30.514 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:04:12.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 8 14:04:12.156: INFO: Waiting up to 5m0s for pod "pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc" in namespace "emptydir-4524" to be "success or failure" May 8 14:04:12.159: INFO: Pod "pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059577ms May 8 14:04:14.172: INFO: Pod "pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016646927s May 8 14:04:16.176: INFO: Pod "pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020669379s STEP: Saw pod success May 8 14:04:16.177: INFO: Pod "pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc" satisfied condition "success or failure" May 8 14:04:16.179: INFO: Trying to get logs from node iruya-worker pod pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc container test-container: STEP: delete the pod May 8 14:04:16.202: INFO: Waiting for pod pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc to disappear May 8 14:04:16.283: INFO: Pod pod-0afaa19d-398a-4cca-a0e3-e7c3fd738fdc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:04:16.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4524" for this suite. 
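
The emptydir permission tests mount an emptyDir volume (default medium, i.e. backed by node disk) and have a test image create a file with a given mode as a given user, then print the mode back. The suite uses its mounttest image for this; a busybox approximation of the (non-root,0644,default) case:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000) // non-root; emptyDir itself is world-writable by default
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Approximates mounttest: create a 0644 file, report its mode.
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium selects the default, node-disk-backed medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}

The (root,0666,default) case at the end of this section is the same pattern with the securityContext dropped and 0666 in place of 0644.
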
May 8 14:04:23.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:04:23.831: INFO: namespace emptydir-4524 deletion completed in 6.907524047s • [SLOW TEST:11.776 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:04:23.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 8 14:04:28.502: INFO: Successfully updated pod "annotationupdatece949531-c94e-45fc-a98e-5455d9597a5a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:04:32.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9531" for this suite. 
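
This test exposes metadata.annotations through a downward API volume, mutates the annotations on the live pod, and waits for the kubelet to rewrite the file on a later volume sync, which is why it polls the file rather than expecting an instant change. The update half of that, assuming 1.15-era client-go signatures; the pod name and namespace are copied from the log, while the annotation key and value are illustrative:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("downward-api-9531")

	// Read-modify-write the live pod's annotations.
	pod, err := pods.Get("annotationupdatece949531-c94e-45fc-a98e-5455d9597a5a", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations["builder"] = "end-user" // illustrative key/value
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
}
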
May 8 14:04:48.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:04:48.619: INFO: namespace downward-api-9531 deletion completed in 16.08481523s • [SLOW TEST:24.788 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:04:48.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 8 14:04:48.788: INFO: Waiting up to 5m0s for pod "downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc" in namespace "downward-api-8104" to be "success or failure" May 8 14:04:48.804: INFO: Pod "downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.229211ms May 8 14:04:50.808: INFO: Pod "downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020184957s May 8 14:04:52.813: INFO: Pod "downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024883425s STEP: Saw pod success May 8 14:04:52.813: INFO: Pod "downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc" satisfied condition "success or failure" May 8 14:04:52.816: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc container dapi-container: STEP: delete the pod May 8 14:04:52.994: INFO: Waiting for pod downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc to disappear May 8 14:04:53.127: INFO: Pod downward-api-3e494d87-dbb5-4715-ad51-db98c64f93bc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:04:53.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8104" for this suite. 
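
Unlike the volume-based variants, this test injects downward API data as an environment variable via fieldRef, here metadata.uid, which is resolved once when the container starts. A minimal manifest sketch (the busybox image and variable name are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// Resolved once, at container start.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
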
May 8 14:04:59.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:04:59.304: INFO: namespace downward-api-8104 deletion completed in 6.172202023s • [SLOW TEST:10.684 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:04:59.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cf379ec9-e3bd-449b-b814-388670336c67 STEP: Creating a pod to test consume secrets May 8 14:04:59.528: INFO: Waiting up to 5m0s for pod "pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2" in namespace "secrets-8325" to be "success or failure" May 8 14:04:59.537: INFO: Pod "pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.355599ms May 8 14:05:01.541: INFO: Pod "pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013603677s May 8 14:05:03.545: INFO: Pod "pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017331086s STEP: Saw pod success May 8 14:05:03.545: INFO: Pod "pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2" satisfied condition "success or failure" May 8 14:05:03.548: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2 container secret-volume-test: STEP: delete the pod May 8 14:05:03.593: INFO: Waiting for pod pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2 to disappear May 8 14:05:03.614: INFO: Pod pod-secrets-9e20abb3-31d0-42e6-af5b-ac289a65c6a2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:05:03.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8325" for this suite. 
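
The secret-volume test is the Secret analogue of the configMap case: mount the named Secret as a volume and print one of its keys back out. An illustrative manifest; the Secret name and key are assumptions, and the Secret must exist in the namespace before the pod starts:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// Assumes a Secret named "demo-secret" with key "data-1".
					Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
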
May 8 14:05:09.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:05:09.777: INFO: namespace secrets-8325 deletion completed in 6.158852543s • [SLOW TEST:10.473 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:05:09.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:05:09.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c" in namespace "downward-api-9974" to be "success or failure" May 8 14:05:09.955: INFO: Pod "downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c": Phase="Pending", Reason="", readiness=false. Elapsed: 116.002404ms May 8 14:05:11.960: INFO: Pod "downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120207934s May 8 14:05:13.964: INFO: Pod "downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124835802s STEP: Saw pod success May 8 14:05:13.964: INFO: Pod "downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c" satisfied condition "success or failure" May 8 14:05:13.967: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c container client-container: STEP: delete the pod May 8 14:05:14.009: INFO: Waiting for pod downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c to disappear May 8 14:05:14.043: INFO: Pod downwardapi-volume-f0e7800f-3743-4d73-a194-fe67ba19b04c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:05:14.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9974" for this suite. 
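
resourceFieldRef lets a downward API volume file report the container's own resource settings; here the file carries requests.memory, scaled by an optional divisor. A sketch with a 1Mi divisor so the file reads in MiB (the request/limit values and paths are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-resource-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/podinfo/mem_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
					Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "mem_request",
							// With a 1Mi divisor the file reads "32".
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
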
May 8 14:05:20.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:05:20.175: INFO: namespace downward-api-9974 deletion completed in 6.128044964s • [SLOW TEST:10.397 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:05:20.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:05:20.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f" in namespace "downward-api-5743" to be "success or failure" May 8 14:05:20.260: INFO: Pod "downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.371148ms May 8 14:05:22.314: INFO: Pod "downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057153949s May 8 14:05:24.318: INFO: Pod "downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061543831s STEP: Saw pod success May 8 14:05:24.318: INFO: Pod "downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f" satisfied condition "success or failure" May 8 14:05:24.321: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f container client-container: STEP: delete the pod May 8 14:05:24.339: INFO: Waiting for pod downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f to disappear May 8 14:05:24.362: INFO: Pod downwardapi-volume-d2dc2f8a-acf7-46f9-b721-0c253c14b01f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:05:24.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5743" for this suite. 
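
The companion test reads limits.cpu for a container that sets no CPU limit; in that case the kubelet substitutes the node's allocatable CPU, which is what the file is checked against. Only the volume item and the deliberately empty resources differ from the previous sketch (a 1m divisor, so the file reads in millicores, is an illustrative choice):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-default-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				// No resources on purpose: with no CPU limit set, the file
				// reports the node's allocatable CPU instead.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"), // millicores
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
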
May 8 14:05:30.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:05:30.476: INFO: namespace downward-api-5743 deletion completed in 6.111203353s • [SLOW TEST:10.301 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:05:30.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-nvb9 STEP: Creating a pod to test atomic-volume-subpath May 8 14:05:30.627: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nvb9" in namespace "subpath-5137" to be "success or failure" May 8 14:05:30.639: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.560617ms May 8 14:05:32.644: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016977212s May 8 14:05:34.650: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.022868093s May 8 14:05:36.654: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 6.027185224s May 8 14:05:38.659: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 8.031867238s May 8 14:05:40.663: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 10.036298259s May 8 14:05:42.667: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 12.040059853s May 8 14:05:44.671: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 14.0442014s May 8 14:05:46.675: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 16.048756092s May 8 14:05:48.680: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 18.053318649s May 8 14:05:50.684: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 20.05741703s May 8 14:05:52.688: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Running", Reason="", readiness=true. Elapsed: 22.06143021s May 8 14:05:54.693: INFO: Pod "pod-subpath-test-projected-nvb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065897635s STEP: Saw pod success May 8 14:05:54.693: INFO: Pod "pod-subpath-test-projected-nvb9" satisfied condition "success or failure" May 8 14:05:54.695: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-nvb9 container test-container-subpath-projected-nvb9: STEP: delete the pod May 8 14:05:54.730: INFO: Waiting for pod pod-subpath-test-projected-nvb9 to disappear May 8 14:05:54.735: INFO: Pod pod-subpath-test-projected-nvb9 no longer exists STEP: Deleting pod pod-subpath-test-projected-nvb9 May 8 14:05:54.735: INFO: Deleting pod "pod-subpath-test-projected-nvb9" in namespace "subpath-5137" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:05:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5137" for this suite. May 8 14:06:00.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:06:00.829: INFO: namespace subpath-5137 deletion completed in 6.088577332s • [SLOW TEST:30.353 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:06:00.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 14:06:04.968: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:06:05.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9755" for this suite. 
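
A container's termination message is read from terminationMessagePath (default /dev/termination-log) when it exits and surfaced in the pod's status; this test moves the path and runs the container as a non-root user, then asserts the message equals DONE. An illustrative equivalent (image, UID, and path are assumptions in the spirit of the test):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox", // illustrative
				// Write the message to a custom path as a non-root user; the
				// kubelet reads it back when the container terminates.
				Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}

Once the pod reaches Succeeded, the text appears in status.containerStatuses[0].state.terminated.message, which is the &{DONE} the assertion above prints.
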
May 8 14:06:11.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:06:11.355: INFO: namespace container-runtime-9755 deletion completed in 6.197973092s
• [SLOW TEST:10.526 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:06:11.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
May 8 14:06:11.414: INFO: Waiting up to 5m0s for pod "client-containers-2225435b-7458-4033-8495-d64c82660833" in namespace "containers-2719" to be "success or failure"
May 8 14:06:11.451: INFO: Pod "client-containers-2225435b-7458-4033-8495-d64c82660833": Phase="Pending", Reason="", readiness=false. Elapsed: 36.561999ms
May 8 14:06:13.456: INFO: Pod "client-containers-2225435b-7458-4033-8495-d64c82660833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041241929s
May 8 14:06:15.459: INFO: Pod "client-containers-2225435b-7458-4033-8495-d64c82660833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044995607s
STEP: Saw pod success
May 8 14:06:15.459: INFO: Pod "client-containers-2225435b-7458-4033-8495-d64c82660833" satisfied condition "success or failure"
May 8 14:06:15.462: INFO: Trying to get logs from node iruya-worker pod client-containers-2225435b-7458-4033-8495-d64c82660833 container test-container:
STEP: delete the pod
May 8 14:06:15.490: INFO: Waiting for pod client-containers-2225435b-7458-4033-8495-d64c82660833 to disappear
May 8 14:06:15.559: INFO: Pod client-containers-2225435b-7458-4033-8495-d64c82660833 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:06:15.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2719" for this suite.
May 8 14:06:21.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:06:21.702: INFO: namespace containers-2719 deletion completed in 6.138898065s
• [SLOW TEST:10.346 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:06:21.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 8 14:06:21.803: INFO: Waiting up to 5m0s for pod "pod-2203bace-e61c-4d2d-9475-1b6b0f47d930" in namespace "emptydir-5018" to be "success or failure"
May 8 14:06:21.807: INFO: Pod "pod-2203bace-e61c-4d2d-9475-1b6b0f47d930": Phase="Pending", Reason="", readiness=false. Elapsed: 3.887754ms
May 8 14:06:23.819: INFO: Pod "pod-2203bace-e61c-4d2d-9475-1b6b0f47d930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015897523s
May 8 14:06:25.823: INFO: Pod "pod-2203bace-e61c-4d2d-9475-1b6b0f47d930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019618044s
STEP: Saw pod success
May 8 14:06:25.823: INFO: Pod "pod-2203bace-e61c-4d2d-9475-1b6b0f47d930" satisfied condition "success or failure"
May 8 14:06:25.826: INFO: Trying to get logs from node iruya-worker2 pod pod-2203bace-e61c-4d2d-9475-1b6b0f47d930 container test-container:
STEP: delete the pod
May 8 14:06:25.857: INFO: Waiting for pod pod-2203bace-e61c-4d2d-9475-1b6b0f47d930 to disappear
May 8 14:06:25.885: INFO: Pod pod-2203bace-e61c-4d2d-9475-1b6b0f47d930 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:06:25.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5018" for this suite.
May 8 14:06:31.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:06:31.986: INFO: namespace emptydir-5018 deletion completed in 6.097436883s
• [SLOW TEST:10.284 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:06:31.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 8 14:06:32.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8932'
May 8 14:06:34.748: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 8 14:06:34.748: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 8 14:06:36.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8932'
May 8 14:06:36.890: INFO: stderr: ""
May 8 14:06:36.890: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:06:36.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8932" for this suite.
May 8 14:06:42.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:06:42.994: INFO: namespace kubectl-8932 deletion completed in 6.100173607s
• [SLOW TEST:11.007 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:06:42.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a9a2c7a4-7581-48c0-b5c2-e0baed981ec1 in namespace container-probe-9669
May 8 14:06:47.105: INFO: Started pod busybox-a9a2c7a4-7581-48c0-b5c2-e0baed981ec1 in namespace container-probe-9669
STEP: checking the pod's current state and verifying that restartCount is present
May 8 14:06:47.108: INFO: Initial restart count of pod busybox-a9a2c7a4-7581-48c0-b5c2-e0baed981ec1 is 0
May 8 14:07:33.216: INFO: Restart count of pod container-probe-9669/busybox-a9a2c7a4-7581-48c0-b5c2-e0baed981ec1 is now 1 (46.107634518s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:07:33.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9669" for this suite.
May 8 14:07:39.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:07:39.322: INFO: namespace container-probe-9669 deletion completed in 6.076768974s
• [SLOW TEST:56.328 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:07:39.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3280
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3280 to expose endpoints map[]
May 8 14:07:39.530: INFO: Get endpoints failed (51.744469ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 8 14:07:40.534: INFO: successfully validated that service endpoint-test2 in namespace services-3280 exposes endpoints map[] (1.055645639s elapsed)
STEP: Creating pod pod1 in namespace services-3280
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3280 to expose endpoints map[pod1:[80]]
May 8 14:07:43.588: INFO: successfully validated that service endpoint-test2 in namespace services-3280 exposes endpoints map[pod1:[80]] (3.045982899s elapsed)
STEP: Creating pod pod2 in namespace services-3280
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3280 to expose endpoints map[pod1:[80] pod2:[80]]
May 8 14:07:49.226: INFO: successfully validated that service endpoint-test2 in namespace services-3280 exposes endpoints map[pod1:[80] pod2:[80]] (5.634114774s elapsed)
STEP: Deleting pod pod1 in namespace services-3280
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3280 to expose endpoints map[pod2:[80]]
May 8 14:07:50.275: INFO: successfully validated that service endpoint-test2 in namespace services-3280 exposes endpoints map[pod2:[80]] (1.045226347s elapsed)
STEP: Deleting pod pod2 in namespace services-3280
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3280 to expose endpoints map[]
May 8 14:07:51.298: INFO: successfully validated that service endpoint-test2 in namespace services-3280 exposes endpoints map[] (1.019664702s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:07:51.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3280" for this suite.
May 8 14:08:13.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:08:13.574: INFO: namespace services-3280 deletion completed in 22.11820366s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:34.251 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:08:13.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-29e0a725-5850-4277-8c09-fa3366e34c9d in namespace container-probe-9935
May 8 14:08:19.671: INFO: Started pod liveness-29e0a725-5850-4277-8c09-fa3366e34c9d in namespace container-probe-9935
STEP: checking the pod's current state and verifying that restartCount is present
May 8 14:08:19.675: INFO: Initial restart count of pod liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is 0
May 8 14:08:31.705: INFO: Restart count of pod container-probe-9935/liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is now 1 (12.030187609s elapsed)
May 8 14:08:51.747: INFO: Restart count of pod container-probe-9935/liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is now 2 (32.071818557s elapsed)
May 8 14:09:11.790: INFO: Restart count of pod container-probe-9935/liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is now 3 (52.11483974s elapsed)
May 8 14:09:31.835: INFO: Restart count of pod container-probe-9935/liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is now 4 (1m12.160053634s elapsed)
May 8 14:10:32.160: INFO: Restart count of pod container-probe-9935/liveness-29e0a725-5850-4277-8c09-fa3366e34c9d is now 5 (2m12.48535539s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:10:32.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9935" for this suite.
May 8 14:10:38.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:10:38.318: INFO: namespace container-probe-9935 deletion completed in 6.096266494s
• [SLOW TEST:144.743 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:10:38.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-75161557-ba92-4fb2-8d57-6bc2d2027b30
STEP: Creating a pod to test consume configMaps
May 8 14:10:38.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998" in namespace "projected-4952" to be "success or failure"
May 8 14:10:38.437: INFO: Pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998": Phase="Pending", Reason="", readiness=false. Elapsed: 43.422919ms
May 8 14:10:40.441: INFO: Pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04743264s
May 8 14:10:42.497: INFO: Pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998": Phase="Running", Reason="", readiness=true. Elapsed: 4.103362546s
May 8 14:10:44.501: INFO: Pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107560552s
STEP: Saw pod success
May 8 14:10:44.501: INFO: Pod "pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998" satisfied condition "success or failure"
May 8 14:10:44.504: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998 container projected-configmap-volume-test:
STEP: delete the pod
May 8 14:10:44.582: INFO: Waiting for pod pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998 to disappear
May 8 14:10:44.590: INFO: Pod pod-projected-configmaps-75dd261c-1064-4f9b-ae63-4ed0e4468998 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:10:44.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4952" for this suite.
May 8 14:10:50.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:10:50.685: INFO: namespace projected-4952 deletion completed in 6.09123573s
• [SLOW TEST:12.367 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:10:50.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d43795d7-e8ed-432d-bf8a-df7cad345bd8 in namespace container-probe-8490
May 8 14:10:54.832: INFO: Started pod liveness-d43795d7-e8ed-432d-bf8a-df7cad345bd8 in namespace container-probe-8490
STEP: checking the pod's current state and verifying that restartCount is present
May 8 14:10:54.835: INFO: Initial restart count of pod liveness-d43795d7-e8ed-432d-bf8a-df7cad345bd8 is 0
May 8 14:11:14.876: INFO: Restart count of pod container-probe-8490/liveness-d43795d7-e8ed-432d-bf8a-df7cad345bd8 is now 1 (20.041585894s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:11:14.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8490" for this suite.
May 8 14:11:20.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:11:21.051: INFO: namespace container-probe-8490 deletion completed in 6.116383408s
• [SLOW TEST:30.365 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:11:21.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-lmfp
STEP: Creating a pod to test atomic-volume-subpath
May 8 14:11:21.214: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lmfp" in namespace "subpath-1905" to be "success or failure"
May 8 14:11:21.222: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.759016ms
May 8 14:11:23.226: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011832235s
May 8 14:11:25.229: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.015157506s
May 8 14:11:27.233: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.019127068s
May 8 14:11:29.238: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.023567817s
May 8 14:11:31.241: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.026910756s
May 8 14:11:33.245: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.031165232s
May 8 14:11:35.250: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.0354627s
May 8 14:11:37.254: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.039572792s
May 8 14:11:39.257: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.042744053s
May 8 14:11:41.261: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.046783256s
May 8 14:11:43.266: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.051500253s
May 8 14:11:45.270: INFO: Pod "pod-subpath-test-downwardapi-lmfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05574278s
STEP: Saw pod success
May 8 14:11:45.270: INFO: Pod "pod-subpath-test-downwardapi-lmfp" satisfied condition "success or failure"
May 8 14:11:45.273: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-lmfp container test-container-subpath-downwardapi-lmfp:
STEP: delete the pod
May 8 14:11:45.297: INFO: Waiting for pod pod-subpath-test-downwardapi-lmfp to disappear
May 8 14:11:45.304: INFO: Pod pod-subpath-test-downwardapi-lmfp no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lmfp
May 8 14:11:45.304: INFO: Deleting pod "pod-subpath-test-downwardapi-lmfp" in namespace "subpath-1905"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:11:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1905" for this suite.
May 8 14:11:51.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:11:51.409: INFO: namespace subpath-1905 deletion completed in 6.101132364s
• [SLOW TEST:30.358 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:11:51.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 8 14:11:51.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642" in namespace "projected-9092" to be "success or failure"
May 8 14:11:51.508: INFO: Pod "downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642": Phase="Pending", Reason="", readiness=false. Elapsed: 5.681515ms
May 8 14:11:53.540: INFO: Pod "downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038378121s
May 8 14:11:55.552: INFO: Pod "downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049606803s
STEP: Saw pod success
May 8 14:11:55.552: INFO: Pod "downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642" satisfied condition "success or failure"
May 8 14:11:55.554: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642 container client-container:
STEP: delete the pod
May 8 14:11:55.577: INFO: Waiting for pod downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642 to disappear
May 8 14:11:55.592: INFO: Pod downwardapi-volume-8acab294-d5f9-48d7-8c17-58f295bf4642 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:11:55.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9092" for this suite.
May 8 14:12:01.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:12:01.724: INFO: namespace projected-9092 deletion completed in 6.128456726s
• [SLOW TEST:10.314 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:12:01.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6818, will wait for the garbage collector to delete the pods
May 8 14:12:07.866: INFO: Deleting Job.batch foo took: 5.337451ms
May 8 14:12:08.166: INFO: Terminating Job.batch foo pods took: 300.257099ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:12:41.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6818" for this suite.
May 8 14:12:47.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:12:47.183: INFO: namespace job-6818 deletion completed in 6.110028669s
• [SLOW TEST:45.459 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:12:47.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 8 14:12:47.267: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
May 8 14:12:48.090: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 8 14:12:50.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 14:12:52.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724543968, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 8 14:12:54.887: INFO: Waited 616.758831ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:12:55.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5331" for this suite.
May 8 14:13:01.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:13:01.812: INFO: namespace aggregator-5331 deletion completed in 6.336914182s
• [SLOW TEST:14.628 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:13:01.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:13:01.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8721" for this suite.
May 8 14:13:07.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:13:08.035: INFO: namespace services-8721 deletion completed in 6.144659834s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.223 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:13:08.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7902
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7902
STEP: Creating statefulset with conflicting port in namespace statefulset-7902
STEP: Waiting until pod test-pod will start running in namespace statefulset-7902
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7902
May 8 14:13:12.252: INFO: Observed stateful pod in namespace: statefulset-7902, name: ss-0, uid: 1b638eeb-4667-47af-b52b-db6a7624224e, status phase: Pending. Waiting for statefulset controller to delete.
May 8 14:13:12.271: INFO: Observed stateful pod in namespace: statefulset-7902, name: ss-0, uid: 1b638eeb-4667-47af-b52b-db6a7624224e, status phase: Pending. Waiting for statefulset controller to delete.
May 8 14:13:22.150: INFO: Observed stateful pod in namespace: statefulset-7902, name: ss-0, uid: 1b638eeb-4667-47af-b52b-db6a7624224e, status phase: Failed. Waiting for statefulset controller to delete.
May 8 14:13:22.201: INFO: Observed stateful pod in namespace: statefulset-7902, name: ss-0, uid: 1b638eeb-4667-47af-b52b-db6a7624224e, status phase: Failed. Waiting for statefulset controller to delete.
May 8 14:13:22.218: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7902
STEP: Removing pod with conflicting port in namespace statefulset-7902
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7902 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 8 14:13:32.369: INFO: Deleting all statefulset in ns statefulset-7902
May 8 14:13:32.372: INFO: Scaling statefulset ss to 0
May 8 14:13:42.390: INFO: Waiting for statefulset status.replicas updated to 0
May 8 14:13:42.393: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:13:42.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7902" for this suite.
May 8 14:13:48.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:13:48.542: INFO: namespace statefulset-7902 deletion completed in 6.121882982s
• [SLOW TEST:40.507 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:13:48.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
May 8 14:13:52.659: INFO: Pod pod-hostip-aa3c042b-0574-4809-98e5-5e343b000ba3 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:13:52.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8192" for this suite.
May 8 14:14:16.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:14:16.800: INFO: namespace pods-8192 deletion completed in 24.137369734s
• [SLOW TEST:28.256 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:14:16.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 8 14:14:16.857: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 8 14:14:18.949: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:14:20.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4067" for this suite.
May 8 14:14:26.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:14:26.417: INFO: namespace replication-controller-4067 deletion completed in 6.191450476s
• [SLOW TEST:9.617 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:14:26.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 8 14:14:59.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-979" for this suite.
May 8 14:15:05.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 8 14:15:05.301: INFO: namespace container-runtime-979 deletion completed in 6.269643841s
• [SLOW TEST:38.884 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 8 14:15:05.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5749.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5749.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5749.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5749.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 191.71.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.71.191_udp@PTR;check="$$(dig +tcp +noall +answer +search 191.71.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.71.191_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5749.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5749.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5749.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5749.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5749.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5749.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 191.71.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.71.191_udp@PTR;check="$$(dig +tcp +noall +answer +search 191.71.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.71.191_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 8 14:15:11.510: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.514: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.516: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.534: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.536: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.539: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.542: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:11.557: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local]
May 8 14:15:16.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.568: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.592: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.598: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:16.616: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local]
May 8 14:15:21.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:21.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:21.591: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:21.594: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447)
May 8 14:15:21.617: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could
not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:21.621: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:21.624: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:21.627: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:21.646: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local] May 8 14:15:26.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.571: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.598: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.608: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod 
dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:26.628: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local] May 8 14:15:31.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.815: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.819: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.822: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.877: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.883: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.886: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:31.903: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local] May 8 14:15:36.562: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.597: INFO: Unable to read jessie_udp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local from pod dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447: the server could not find the requested resource (get pods dns-test-e43c7144-7018-42d9-ae73-019c1e982447) May 8 14:15:36.628: INFO: Lookups using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 failed for: [wheezy_udp@dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@dns-test-service.dns-5749.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_udp@dns-test-service.dns-5749.svc.cluster.local jessie_tcp@dns-test-service.dns-5749.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5749.svc.cluster.local] May 8 14:15:41.628: INFO: DNS probes using dns-5749/dns-test-e43c7144-7018-42d9-ae73-019c1e982447 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:15:42.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5749" for this suite. 
May 8 14:15:48.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:15:48.678: INFO: namespace dns-5749 deletion completed in 6.120752193s • [SLOW TEST:43.377 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:15:48.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-295/configmap-test-3f9d3a9e-02e6-446c-a1c1-24b933760d25 STEP: Creating a pod to test consume configMaps May 8 14:15:48.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1" in namespace "configmap-295" to be "success or failure" May 8 14:15:48.772: INFO: Pod "pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25045ms May 8 14:15:50.776: INFO: Pod "pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007164276s May 8 14:15:52.780: INFO: Pod "pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010974738s STEP: Saw pod success May 8 14:15:52.780: INFO: Pod "pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1" satisfied condition "success or failure" May 8 14:15:52.783: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1 container env-test: STEP: delete the pod May 8 14:15:52.810: INFO: Waiting for pod pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1 to disappear May 8 14:15:52.830: INFO: Pod pod-configmaps-84f6c066-1675-4667-ab95-1dfc71ba39b1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:15:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-295" for this suite. 
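For reference, the ConfigMap-to-environment-variable wiring this test exercises can be reproduced by hand with a minimal pod; all names below are illustrative, not taken from the test:

    # Hypothetical names throughout; any cluster with a default service account will do.
    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep DATA_1"]
        env:
        - name: DATA_1
          valueFrom:
            configMapKeyRef:
              name: demo-config
              key: data-1
    EOF

Once the pod has run to completion, kubectl logs configmap-env-demo should print DATA_1=value-1, which is essentially what the test's env-test container asserts.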
May 8 14:15:58.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:15:58.934: INFO: namespace configmap-295 deletion completed in 6.098048616s • [SLOW TEST:10.256 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:15:58.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-85562c78-4a4e-45c5-85cf-dfde904eb949 STEP: Creating a pod to test consume secrets May 8 14:15:59.018: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7" in namespace "projected-266" to be "success or failure" May 8 14:15:59.022: INFO: Pod "pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787972ms May 8 14:16:01.639: INFO: Pod "pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620668426s May 8 14:16:03.642: INFO: Pod "pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.623980211s STEP: Saw pod success May 8 14:16:03.642: INFO: Pod "pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7" satisfied condition "success or failure" May 8 14:16:03.646: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7 container projected-secret-volume-test: STEP: delete the pod May 8 14:16:03.677: INFO: Waiting for pod pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7 to disappear May 8 14:16:03.687: INFO: Pod pod-projected-secrets-3834eca6-2a93-4575-a77a-82cba14912f7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:16:03.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-266" for this suite. 
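The "mappings and Item Mode" wording refers to remapping a secret key to a new file path and giving that file an explicit mode inside a projected volume. A minimal sketch with invented names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1
                path: new-path-data-1   # the mapping: key data-1 appears under a new file name
                mode: 0400              # octal item mode; ls -lL shows -r--------
    EOF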
May 8 14:16:09.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:16:09.818: INFO: namespace projected-266 deletion completed in 6.128084602s • [SLOW TEST:10.884 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:16:09.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4a52dfec-3a31-47e6-a2e3-65afd888ad73 STEP: Creating a pod to test consume secrets May 8 14:16:09.929: INFO: Waiting up to 5m0s for pod "pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4" in namespace "secrets-2780" to be "success or failure" May 8 14:16:09.933: INFO: Pod "pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02917ms May 8 14:16:12.034: INFO: Pod "pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105138017s May 8 14:16:14.039: INFO: Pod "pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109682998s STEP: Saw pod success May 8 14:16:14.039: INFO: Pod "pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4" satisfied condition "success or failure" May 8 14:16:14.042: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4 container secret-volume-test: STEP: delete the pod May 8 14:16:14.101: INFO: Waiting for pod pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4 to disappear May 8 14:16:14.106: INFO: Pod pod-secrets-1443fe83-24f2-4a53-afbe-c587a2ac01a4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:16:14.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2780" for this suite. 
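Here the interesting pieces are the pod-level securityContext (a non-root UID plus fsGroup) combined with the secret volume's defaultMode; a sketch with invented names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000   # run the container as a non-root UID
        fsGroup: 2000     # group ownership applied to the mounted files
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "id && ls -lnL /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0440   # octal; files come up group-readable for fsGroup 2000
    EOF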
May 8 14:16:20.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:16:20.195: INFO: namespace secrets-2780 deletion completed in 6.084912795s • [SLOW TEST:10.376 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:16:20.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 14:16:24.564: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:16:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2977" for this suite. 
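The container under test writes its termination message to a file and exits successfully; because that file is non-empty, FallbackToLogsOnError never needs to fall back to the logs. A minimal reproduction (invented names; /dev/termination-log is the API's default path):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: term-msg
        image: busybox
        command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # After the pod succeeds, the message surfaces in the container status:
    kubectl get pod termination-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

The jsonpath query should print OK, matching the Expected: &{OK} assertion in the log.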
May 8 14:16:30.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:16:30.700: INFO: namespace container-runtime-2977 deletion completed in 6.11318754s • [SLOW TEST:10.505 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:16:30.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4913 I0508 14:16:30.804126 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4913, replica count: 1 I0508 14:16:31.854523 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 14:16:32.854712 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 14:16:33.854903 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 14:16:34.855052 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 14:16:34.985: INFO: Created: latency-svc-7jb2t May 8 14:16:34.995: INFO: Got endpoints: latency-svc-7jb2t [40.694692ms] May 8 14:16:35.070: INFO: Created: latency-svc-8mw4m May 8 14:16:35.099: INFO: Got endpoints: latency-svc-8mw4m [103.095671ms] May 8 14:16:35.100: INFO: Created: latency-svc-sdwbf May 8 14:16:35.114: INFO: Got endpoints: latency-svc-sdwbf [118.060368ms] May 8 14:16:35.243: INFO: Created: latency-svc-dr9hn May 8 14:16:35.270: INFO: Got endpoints: latency-svc-dr9hn [274.053084ms] May 8 14:16:35.322: INFO: Created: latency-svc-jjj5h May 8 14:16:35.336: INFO: Got endpoints: latency-svc-jjj5h [340.28174ms] May 8 14:16:35.380: INFO: Created: latency-svc-2p94t May 8 14:16:35.403: INFO: Got endpoints: latency-svc-2p94t [406.302373ms] May 8 14:16:35.447: INFO: Created: latency-svc-4ddtt May 8 14:16:35.465: INFO: Got endpoints: latency-svc-4ddtt [468.770931ms] May 8 14:16:35.507: INFO: Created: latency-svc-544s8 May 8 14:16:35.532: INFO: Got endpoints: latency-svc-544s8 [535.649679ms] May 8 
14:16:35.586: INFO: Created: latency-svc-294dq May 8 14:16:35.588: INFO: Got endpoints: latency-svc-294dq [592.190295ms] May 8 14:16:35.627: INFO: Created: latency-svc-zrsws May 8 14:16:35.640: INFO: Got endpoints: latency-svc-zrsws [643.150604ms] May 8 14:16:35.669: INFO: Created: latency-svc-jjtws May 8 14:16:35.682: INFO: Got endpoints: latency-svc-jjtws [685.970484ms] May 8 14:16:35.732: INFO: Created: latency-svc-qkdxl May 8 14:16:35.738: INFO: Got endpoints: latency-svc-qkdxl [741.844426ms] May 8 14:16:35.767: INFO: Created: latency-svc-js4vf May 8 14:16:35.818: INFO: Got endpoints: latency-svc-js4vf [821.996759ms] May 8 14:16:35.903: INFO: Created: latency-svc-kpl5j May 8 14:16:35.956: INFO: Got endpoints: latency-svc-kpl5j [959.148624ms] May 8 14:16:35.998: INFO: Created: latency-svc-58hwf May 8 14:16:36.047: INFO: Got endpoints: latency-svc-58hwf [1.050049085s] May 8 14:16:36.071: INFO: Created: latency-svc-drj8w May 8 14:16:36.087: INFO: Got endpoints: latency-svc-drj8w [1.090723161s] May 8 14:16:36.112: INFO: Created: latency-svc-vwf96 May 8 14:16:36.122: INFO: Got endpoints: latency-svc-vwf96 [1.023251889s] May 8 14:16:36.196: INFO: Created: latency-svc-8lrnf May 8 14:16:36.199: INFO: Got endpoints: latency-svc-8lrnf [1.085210329s] May 8 14:16:36.234: INFO: Created: latency-svc-4tmjj May 8 14:16:36.249: INFO: Got endpoints: latency-svc-4tmjj [978.989438ms] May 8 14:16:36.274: INFO: Created: latency-svc-lm26g May 8 14:16:36.292: INFO: Got endpoints: latency-svc-lm26g [955.023534ms] May 8 14:16:36.340: INFO: Created: latency-svc-n5hnw May 8 14:16:36.351: INFO: Got endpoints: latency-svc-n5hnw [948.812974ms] May 8 14:16:36.376: INFO: Created: latency-svc-b97kl May 8 14:16:36.394: INFO: Got endpoints: latency-svc-b97kl [929.343937ms] May 8 14:16:36.508: INFO: Created: latency-svc-7kfbh May 8 14:16:36.550: INFO: Got endpoints: latency-svc-7kfbh [1.018065893s] May 8 14:16:36.592: INFO: Created: latency-svc-c2g7j May 8 14:16:36.647: INFO: Got endpoints: latency-svc-c2g7j [1.058966224s] May 8 14:16:36.701: INFO: Created: latency-svc-dj7tm May 8 14:16:36.731: INFO: Got endpoints: latency-svc-dj7tm [1.091354914s] May 8 14:16:36.819: INFO: Created: latency-svc-44zd9 May 8 14:16:36.822: INFO: Got endpoints: latency-svc-44zd9 [1.139424877s] May 8 14:16:36.873: INFO: Created: latency-svc-5kbvg May 8 14:16:36.888: INFO: Got endpoints: latency-svc-5kbvg [1.149248839s] May 8 14:16:36.909: INFO: Created: latency-svc-9q2t6 May 8 14:16:36.962: INFO: Got endpoints: latency-svc-9q2t6 [1.144010293s] May 8 14:16:36.988: INFO: Created: latency-svc-qm8vk May 8 14:16:36.997: INFO: Got endpoints: latency-svc-qm8vk [1.040889739s] May 8 14:16:37.023: INFO: Created: latency-svc-6q46q May 8 14:16:37.033: INFO: Got endpoints: latency-svc-6q46q [986.222888ms] May 8 14:16:37.059: INFO: Created: latency-svc-gpf9d May 8 14:16:37.106: INFO: Got endpoints: latency-svc-gpf9d [1.019739118s] May 8 14:16:37.113: INFO: Created: latency-svc-jxbz6 May 8 14:16:37.130: INFO: Got endpoints: latency-svc-jxbz6 [1.007692657s] May 8 14:16:37.163: INFO: Created: latency-svc-q7q5f May 8 14:16:37.178: INFO: Got endpoints: latency-svc-q7q5f [978.481986ms] May 8 14:16:37.268: INFO: Created: latency-svc-b48g7 May 8 14:16:37.304: INFO: Got endpoints: latency-svc-b48g7 [1.055054344s] May 8 14:16:37.323: INFO: Created: latency-svc-6zzxs May 8 14:16:37.340: INFO: Got endpoints: latency-svc-6zzxs [1.048889025s] May 8 14:16:37.366: INFO: Created: latency-svc-f6vjs May 8 14:16:37.429: INFO: Got endpoints: latency-svc-f6vjs [1.077840009s] May 
8 14:16:37.443: INFO: Created: latency-svc-js978 May 8 14:16:37.461: INFO: Got endpoints: latency-svc-js978 [1.066821922s] May 8 14:16:37.488: INFO: Created: latency-svc-pds5k May 8 14:16:37.497: INFO: Got endpoints: latency-svc-pds5k [947.51543ms] May 8 14:16:37.573: INFO: Created: latency-svc-9jsvt May 8 14:16:37.587: INFO: Got endpoints: latency-svc-9jsvt [939.471146ms] May 8 14:16:37.623: INFO: Created: latency-svc-7qd4f May 8 14:16:37.659: INFO: Got endpoints: latency-svc-7qd4f [928.173424ms] May 8 14:16:37.729: INFO: Created: latency-svc-pkhqv May 8 14:16:37.739: INFO: Got endpoints: latency-svc-pkhqv [917.138662ms] May 8 14:16:37.767: INFO: Created: latency-svc-xrt2n May 8 14:16:37.781: INFO: Got endpoints: latency-svc-xrt2n [893.646738ms] May 8 14:16:37.809: INFO: Created: latency-svc-qcgmt May 8 14:16:37.817: INFO: Got endpoints: latency-svc-qcgmt [854.889432ms] May 8 14:16:37.873: INFO: Created: latency-svc-72pgd May 8 14:16:37.884: INFO: Got endpoints: latency-svc-72pgd [886.962005ms] May 8 14:16:37.911: INFO: Created: latency-svc-5xmf4 May 8 14:16:37.926: INFO: Got endpoints: latency-svc-5xmf4 [893.303686ms] May 8 14:16:37.954: INFO: Created: latency-svc-qxpkw May 8 14:16:37.969: INFO: Got endpoints: latency-svc-qxpkw [863.024009ms] May 8 14:16:38.023: INFO: Created: latency-svc-krjcx May 8 14:16:38.052: INFO: Got endpoints: latency-svc-krjcx [921.885067ms] May 8 14:16:38.085: INFO: Created: latency-svc-844wb May 8 14:16:38.103: INFO: Got endpoints: latency-svc-844wb [924.852243ms] May 8 14:16:38.154: INFO: Created: latency-svc-tpdjw May 8 14:16:38.157: INFO: Got endpoints: latency-svc-tpdjw [852.797934ms] May 8 14:16:38.187: INFO: Created: latency-svc-4rl2f May 8 14:16:38.213: INFO: Got endpoints: latency-svc-4rl2f [872.059002ms] May 8 14:16:38.286: INFO: Created: latency-svc-x8dfj May 8 14:16:38.320: INFO: Got endpoints: latency-svc-x8dfj [890.245486ms] May 8 14:16:38.320: INFO: Created: latency-svc-2ttf7 May 8 14:16:38.349: INFO: Got endpoints: latency-svc-2ttf7 [887.782002ms] May 8 14:16:38.448: INFO: Created: latency-svc-nl8dm May 8 14:16:38.450: INFO: Got endpoints: latency-svc-nl8dm [952.972582ms] May 8 14:16:38.535: INFO: Created: latency-svc-gjbsk May 8 14:16:38.579: INFO: Got endpoints: latency-svc-gjbsk [992.06093ms] May 8 14:16:38.595: INFO: Created: latency-svc-k89g4 May 8 14:16:38.631: INFO: Got endpoints: latency-svc-k89g4 [971.67458ms] May 8 14:16:38.667: INFO: Created: latency-svc-rn428 May 8 14:16:38.717: INFO: Got endpoints: latency-svc-rn428 [977.967848ms] May 8 14:16:38.732: INFO: Created: latency-svc-gxdnh May 8 14:16:38.746: INFO: Got endpoints: latency-svc-gxdnh [964.994894ms] May 8 14:16:38.781: INFO: Created: latency-svc-zlgcv May 8 14:16:38.802: INFO: Got endpoints: latency-svc-zlgcv [984.159905ms] May 8 14:16:38.879: INFO: Created: latency-svc-vnwsn May 8 14:16:38.885: INFO: Got endpoints: latency-svc-vnwsn [1.001260486s] May 8 14:16:38.925: INFO: Created: latency-svc-qmr4m May 8 14:16:38.940: INFO: Got endpoints: latency-svc-qmr4m [1.013561787s] May 8 14:16:38.967: INFO: Created: latency-svc-cjfmx May 8 14:16:38.976: INFO: Got endpoints: latency-svc-cjfmx [1.006717649s] May 8 14:16:39.047: INFO: Created: latency-svc-jxvr8 May 8 14:16:39.050: INFO: Got endpoints: latency-svc-jxvr8 [997.541877ms] May 8 14:16:39.105: INFO: Created: latency-svc-nxjd9 May 8 14:16:39.121: INFO: Got endpoints: latency-svc-nxjd9 [1.018056456s] May 8 14:16:39.142: INFO: Created: latency-svc-bmkrb May 8 14:16:39.175: INFO: Got endpoints: latency-svc-bmkrb [1.018131351s] May 8 
14:16:39.237: INFO: Created: latency-svc-2mcsl May 8 14:16:39.260: INFO: Got endpoints: latency-svc-2mcsl [1.046964913s] May 8 14:16:39.340: INFO: Created: latency-svc-4258l May 8 14:16:39.344: INFO: Got endpoints: latency-svc-4258l [1.024165206s] May 8 14:16:39.380: INFO: Created: latency-svc-bjv6c May 8 14:16:39.399: INFO: Got endpoints: latency-svc-bjv6c [1.05013859s] May 8 14:16:39.428: INFO: Created: latency-svc-jwwtf May 8 14:16:39.477: INFO: Got endpoints: latency-svc-jwwtf [1.026945863s] May 8 14:16:39.494: INFO: Created: latency-svc-gwcf4 May 8 14:16:39.513: INFO: Got endpoints: latency-svc-gwcf4 [934.179706ms] May 8 14:16:39.543: INFO: Created: latency-svc-pxqfx May 8 14:16:39.562: INFO: Got endpoints: latency-svc-pxqfx [931.218184ms] May 8 14:16:39.627: INFO: Created: latency-svc-7ndf4 May 8 14:16:39.630: INFO: Got endpoints: latency-svc-7ndf4 [913.383439ms] May 8 14:16:39.663: INFO: Created: latency-svc-p474k May 8 14:16:39.676: INFO: Got endpoints: latency-svc-p474k [929.913691ms] May 8 14:16:39.705: INFO: Created: latency-svc-bz42c May 8 14:16:39.713: INFO: Got endpoints: latency-svc-bz42c [911.320644ms] May 8 14:16:39.760: INFO: Created: latency-svc-cph5r May 8 14:16:39.767: INFO: Got endpoints: latency-svc-cph5r [882.062913ms] May 8 14:16:39.795: INFO: Created: latency-svc-fx7v4 May 8 14:16:39.810: INFO: Got endpoints: latency-svc-fx7v4 [869.997745ms] May 8 14:16:39.830: INFO: Created: latency-svc-4jzkr May 8 14:16:39.847: INFO: Got endpoints: latency-svc-4jzkr [870.475048ms] May 8 14:16:39.895: INFO: Created: latency-svc-rgb25 May 8 14:16:39.898: INFO: Got endpoints: latency-svc-rgb25 [848.469891ms] May 8 14:16:39.926: INFO: Created: latency-svc-z86xh May 8 14:16:39.938: INFO: Got endpoints: latency-svc-z86xh [816.900154ms] May 8 14:16:39.975: INFO: Created: latency-svc-dp77k May 8 14:16:40.023: INFO: Got endpoints: latency-svc-dp77k [847.099167ms] May 8 14:16:40.046: INFO: Created: latency-svc-smrb2 May 8 14:16:40.064: INFO: Got endpoints: latency-svc-smrb2 [804.5243ms] May 8 14:16:40.088: INFO: Created: latency-svc-fz5g5 May 8 14:16:40.107: INFO: Got endpoints: latency-svc-fz5g5 [762.589549ms] May 8 14:16:40.149: INFO: Created: latency-svc-tg6k7 May 8 14:16:40.166: INFO: Got endpoints: latency-svc-tg6k7 [766.517967ms] May 8 14:16:40.202: INFO: Created: latency-svc-8jdzc May 8 14:16:40.215: INFO: Got endpoints: latency-svc-8jdzc [737.811118ms] May 8 14:16:40.244: INFO: Created: latency-svc-qrmvl May 8 14:16:40.280: INFO: Got endpoints: latency-svc-qrmvl [766.542007ms] May 8 14:16:40.298: INFO: Created: latency-svc-bfz9l May 8 14:16:40.312: INFO: Got endpoints: latency-svc-bfz9l [749.801966ms] May 8 14:16:40.372: INFO: Created: latency-svc-q55vw May 8 14:16:40.417: INFO: Got endpoints: latency-svc-q55vw [787.022001ms] May 8 14:16:40.425: INFO: Created: latency-svc-s4dsv May 8 14:16:40.427: INFO: Got endpoints: latency-svc-s4dsv [750.283534ms] May 8 14:16:40.460: INFO: Created: latency-svc-pm4qt May 8 14:16:40.482: INFO: Got endpoints: latency-svc-pm4qt [769.236236ms] May 8 14:16:40.562: INFO: Created: latency-svc-kwcnr May 8 14:16:40.574: INFO: Got endpoints: latency-svc-kwcnr [806.536044ms] May 8 14:16:40.616: INFO: Created: latency-svc-g4dzj May 8 14:16:40.639: INFO: Got endpoints: latency-svc-g4dzj [828.92627ms] May 8 14:16:40.687: INFO: Created: latency-svc-m6fsc May 8 14:16:40.701: INFO: Got endpoints: latency-svc-m6fsc [854.372152ms] May 8 14:16:40.749: INFO: Created: latency-svc-jnwcd May 8 14:16:40.784: INFO: Got endpoints: latency-svc-jnwcd [885.433973ms] May 8 
14:16:40.849: INFO: Created: latency-svc-r5hvh May 8 14:16:40.852: INFO: Got endpoints: latency-svc-r5hvh [914.656862ms] May 8 14:16:40.892: INFO: Created: latency-svc-4gzsl May 8 14:16:40.910: INFO: Got endpoints: latency-svc-4gzsl [887.493376ms] May 8 14:16:40.993: INFO: Created: latency-svc-m6ddc May 8 14:16:41.000: INFO: Got endpoints: latency-svc-m6ddc [935.474117ms] May 8 14:16:41.030: INFO: Created: latency-svc-zkbn2 May 8 14:16:41.066: INFO: Got endpoints: latency-svc-zkbn2 [959.312113ms] May 8 14:16:41.132: INFO: Created: latency-svc-ggp88 May 8 14:16:41.133: INFO: Got endpoints: latency-svc-ggp88 [967.356114ms] May 8 14:16:41.168: INFO: Created: latency-svc-fq265 May 8 14:16:41.181: INFO: Got endpoints: latency-svc-fq265 [966.103828ms] May 8 14:16:41.222: INFO: Created: latency-svc-cr6jh May 8 14:16:41.304: INFO: Got endpoints: latency-svc-cr6jh [1.02382039s] May 8 14:16:41.324: INFO: Created: latency-svc-ztp4j May 8 14:16:41.372: INFO: Got endpoints: latency-svc-ztp4j [1.059638625s] May 8 14:16:41.458: INFO: Created: latency-svc-pktcm May 8 14:16:41.458: INFO: Got endpoints: latency-svc-pktcm [1.040821225s] May 8 14:16:41.492: INFO: Created: latency-svc-fbw7g May 8 14:16:41.507: INFO: Got endpoints: latency-svc-fbw7g [1.080247568s] May 8 14:16:41.546: INFO: Created: latency-svc-bt4hf May 8 14:16:41.550: INFO: Got endpoints: latency-svc-bt4hf [1.067871794s] May 8 14:16:41.621: INFO: Created: latency-svc-bkcbj May 8 14:16:41.626: INFO: Got endpoints: latency-svc-bkcbj [1.05165779s] May 8 14:16:41.654: INFO: Created: latency-svc-jnhdc May 8 14:16:41.670: INFO: Got endpoints: latency-svc-jnhdc [1.031157779s] May 8 14:16:41.697: INFO: Created: latency-svc-xxjft May 8 14:16:41.713: INFO: Got endpoints: latency-svc-xxjft [1.011541906s] May 8 14:16:41.772: INFO: Created: latency-svc-pcsgq May 8 14:16:41.775: INFO: Got endpoints: latency-svc-pcsgq [991.154464ms] May 8 14:16:41.840: INFO: Created: latency-svc-vsw2q May 8 14:16:41.858: INFO: Got endpoints: latency-svc-vsw2q [1.005719679s] May 8 14:16:41.909: INFO: Created: latency-svc-kfr9z May 8 14:16:41.912: INFO: Got endpoints: latency-svc-kfr9z [1.00160437s] May 8 14:16:41.954: INFO: Created: latency-svc-2prct May 8 14:16:41.973: INFO: Got endpoints: latency-svc-2prct [973.102785ms] May 8 14:16:42.008: INFO: Created: latency-svc-6czpd May 8 14:16:42.046: INFO: Got endpoints: latency-svc-6czpd [980.267613ms] May 8 14:16:42.074: INFO: Created: latency-svc-8zdb6 May 8 14:16:42.088: INFO: Got endpoints: latency-svc-8zdb6 [954.23765ms] May 8 14:16:42.118: INFO: Created: latency-svc-d8gxx May 8 14:16:42.130: INFO: Got endpoints: latency-svc-d8gxx [948.71173ms] May 8 14:16:42.190: INFO: Created: latency-svc-lzmj5 May 8 14:16:42.194: INFO: Got endpoints: latency-svc-lzmj5 [890.421967ms] May 8 14:16:42.230: INFO: Created: latency-svc-mwpt9 May 8 14:16:42.247: INFO: Got endpoints: latency-svc-mwpt9 [875.556783ms] May 8 14:16:42.277: INFO: Created: latency-svc-sgrvj May 8 14:16:42.328: INFO: Got endpoints: latency-svc-sgrvj [869.252301ms] May 8 14:16:42.343: INFO: Created: latency-svc-6t74w May 8 14:16:42.360: INFO: Got endpoints: latency-svc-6t74w [852.629334ms] May 8 14:16:42.380: INFO: Created: latency-svc-nrp96 May 8 14:16:42.396: INFO: Got endpoints: latency-svc-nrp96 [845.713487ms] May 8 14:16:42.422: INFO: Created: latency-svc-6wfh9 May 8 14:16:42.471: INFO: Got endpoints: latency-svc-6wfh9 [845.591417ms] May 8 14:16:42.506: INFO: Created: latency-svc-h99gb May 8 14:16:42.517: INFO: Got endpoints: latency-svc-h99gb [846.750429ms] May 8 
14:16:42.571: INFO: Created: latency-svc-qq5xr May 8 14:16:42.621: INFO: Got endpoints: latency-svc-qq5xr [908.452684ms] May 8 14:16:42.637: INFO: Created: latency-svc-9c26l May 8 14:16:42.655: INFO: Got endpoints: latency-svc-9c26l [880.521572ms] May 8 14:16:42.679: INFO: Created: latency-svc-cpkt7 May 8 14:16:42.698: INFO: Got endpoints: latency-svc-cpkt7 [839.658791ms] May 8 14:16:42.771: INFO: Created: latency-svc-2jsft May 8 14:16:42.774: INFO: Got endpoints: latency-svc-2jsft [862.441793ms] May 8 14:16:42.805: INFO: Created: latency-svc-s8wct May 8 14:16:42.835: INFO: Got endpoints: latency-svc-s8wct [861.812384ms] May 8 14:16:42.865: INFO: Created: latency-svc-ncc72 May 8 14:16:42.903: INFO: Got endpoints: latency-svc-ncc72 [856.712577ms] May 8 14:16:42.937: INFO: Created: latency-svc-5brwk May 8 14:16:42.963: INFO: Got endpoints: latency-svc-5brwk [875.511371ms] May 8 14:16:43.065: INFO: Created: latency-svc-zkx4d May 8 14:16:43.077: INFO: Got endpoints: latency-svc-zkx4d [947.164293ms] May 8 14:16:43.099: INFO: Created: latency-svc-lb7rq May 8 14:16:43.114: INFO: Got endpoints: latency-svc-lb7rq [919.784365ms] May 8 14:16:43.159: INFO: Created: latency-svc-52zkl May 8 14:16:43.202: INFO: Got endpoints: latency-svc-52zkl [955.061717ms] May 8 14:16:43.225: INFO: Created: latency-svc-8z8b7 May 8 14:16:43.235: INFO: Got endpoints: latency-svc-8z8b7 [906.803055ms] May 8 14:16:43.261: INFO: Created: latency-svc-wvzh5 May 8 14:16:43.265: INFO: Got endpoints: latency-svc-wvzh5 [905.353376ms] May 8 14:16:43.388: INFO: Created: latency-svc-98nwl May 8 14:16:43.404: INFO: Got endpoints: latency-svc-98nwl [1.007868802s] May 8 14:16:43.441: INFO: Created: latency-svc-snhpq May 8 14:16:43.507: INFO: Got endpoints: latency-svc-snhpq [1.036098379s] May 8 14:16:43.532: INFO: Created: latency-svc-gwshn May 8 14:16:43.548: INFO: Got endpoints: latency-svc-gwshn [1.031523871s] May 8 14:16:43.591: INFO: Created: latency-svc-rxs8v May 8 14:16:43.677: INFO: Got endpoints: latency-svc-rxs8v [1.055778854s] May 8 14:16:43.705: INFO: Created: latency-svc-7zwlk May 8 14:16:43.723: INFO: Got endpoints: latency-svc-7zwlk [1.06717326s] May 8 14:16:43.808: INFO: Created: latency-svc-nt4rn May 8 14:16:43.837: INFO: Got endpoints: latency-svc-nt4rn [1.139170594s] May 8 14:16:43.867: INFO: Created: latency-svc-glrdt May 8 14:16:43.879: INFO: Got endpoints: latency-svc-glrdt [1.104740004s] May 8 14:16:43.951: INFO: Created: latency-svc-45x8w May 8 14:16:43.970: INFO: Got endpoints: latency-svc-45x8w [1.134886313s] May 8 14:16:44.011: INFO: Created: latency-svc-rs9fw May 8 14:16:44.036: INFO: Got endpoints: latency-svc-rs9fw [1.133022277s] May 8 14:16:44.107: INFO: Created: latency-svc-nlj45 May 8 14:16:44.111: INFO: Got endpoints: latency-svc-nlj45 [1.147213399s] May 8 14:16:44.142: INFO: Created: latency-svc-4l4vb May 8 14:16:44.151: INFO: Got endpoints: latency-svc-4l4vb [1.073435053s] May 8 14:16:44.172: INFO: Created: latency-svc-dgvzm May 8 14:16:44.181: INFO: Got endpoints: latency-svc-dgvzm [1.067204481s] May 8 14:16:44.202: INFO: Created: latency-svc-vrww4 May 8 14:16:44.256: INFO: Got endpoints: latency-svc-vrww4 [1.053549666s] May 8 14:16:44.268: INFO: Created: latency-svc-4v82b May 8 14:16:44.285: INFO: Got endpoints: latency-svc-4v82b [1.050237063s] May 8 14:16:44.304: INFO: Created: latency-svc-tzk58 May 8 14:16:44.322: INFO: Got endpoints: latency-svc-tzk58 [1.056670166s] May 8 14:16:44.342: INFO: Created: latency-svc-87xp9 May 8 14:16:44.394: INFO: Got endpoints: latency-svc-87xp9 [990.050746ms] May 8 
14:16:44.424: INFO: Created: latency-svc-dtf9d May 8 14:16:44.453: INFO: Got endpoints: latency-svc-dtf9d [945.936006ms] May 8 14:16:44.484: INFO: Created: latency-svc-g28tb May 8 14:16:44.538: INFO: Got endpoints: latency-svc-g28tb [989.288301ms] May 8 14:16:44.569: INFO: Created: latency-svc-7gkjs May 8 14:16:44.598: INFO: Got endpoints: latency-svc-7gkjs [921.032566ms] May 8 14:16:44.634: INFO: Created: latency-svc-wsbxz May 8 14:16:44.681: INFO: Got endpoints: latency-svc-wsbxz [958.56054ms] May 8 14:16:44.688: INFO: Created: latency-svc-bn9r5 May 8 14:16:44.707: INFO: Got endpoints: latency-svc-bn9r5 [869.911724ms] May 8 14:16:44.737: INFO: Created: latency-svc-fqjtw May 8 14:16:44.775: INFO: Got endpoints: latency-svc-fqjtw [895.850623ms] May 8 14:16:44.874: INFO: Created: latency-svc-djvbt May 8 14:16:44.912: INFO: Got endpoints: latency-svc-djvbt [942.093375ms] May 8 14:16:44.935: INFO: Created: latency-svc-5x47v May 8 14:16:44.986: INFO: Got endpoints: latency-svc-5x47v [950.081171ms] May 8 14:16:45.012: INFO: Created: latency-svc-jdbqb May 8 14:16:45.038: INFO: Got endpoints: latency-svc-jdbqb [927.540158ms] May 8 14:16:45.066: INFO: Created: latency-svc-h65cd May 8 14:16:45.081: INFO: Got endpoints: latency-svc-h65cd [929.561957ms] May 8 14:16:45.137: INFO: Created: latency-svc-mhbpx May 8 14:16:45.147: INFO: Got endpoints: latency-svc-mhbpx [965.90037ms] May 8 14:16:45.175: INFO: Created: latency-svc-qblwm May 8 14:16:45.190: INFO: Got endpoints: latency-svc-qblwm [933.855367ms] May 8 14:16:45.304: INFO: Created: latency-svc-2xjbn May 8 14:16:45.334: INFO: Got endpoints: latency-svc-2xjbn [1.048903105s] May 8 14:16:45.402: INFO: Created: latency-svc-gvxxs May 8 14:16:45.473: INFO: Got endpoints: latency-svc-gvxxs [1.150783115s] May 8 14:16:45.504: INFO: Created: latency-svc-dwwq7 May 8 14:16:45.520: INFO: Got endpoints: latency-svc-dwwq7 [1.125987651s] May 8 14:16:45.552: INFO: Created: latency-svc-bbsrc May 8 14:16:45.571: INFO: Got endpoints: latency-svc-bbsrc [1.117695856s] May 8 14:16:45.627: INFO: Created: latency-svc-nwk7k May 8 14:16:45.634: INFO: Got endpoints: latency-svc-nwk7k [1.096670776s] May 8 14:16:45.666: INFO: Created: latency-svc-kphn9 May 8 14:16:45.702: INFO: Got endpoints: latency-svc-kphn9 [1.103840061s] May 8 14:16:45.777: INFO: Created: latency-svc-vvlsf May 8 14:16:45.793: INFO: Got endpoints: latency-svc-vvlsf [1.111304024s] May 8 14:16:45.846: INFO: Created: latency-svc-7j7zc May 8 14:16:45.859: INFO: Got endpoints: latency-svc-7j7zc [1.151493097s] May 8 14:16:45.915: INFO: Created: latency-svc-zjzpf May 8 14:16:45.919: INFO: Got endpoints: latency-svc-zjzpf [1.14389415s] May 8 14:16:45.955: INFO: Created: latency-svc-kz7pp May 8 14:16:45.990: INFO: Got endpoints: latency-svc-kz7pp [1.077682518s] May 8 14:16:46.059: INFO: Created: latency-svc-fnxg2 May 8 14:16:46.061: INFO: Got endpoints: latency-svc-fnxg2 [1.074788862s] May 8 14:16:46.092: INFO: Created: latency-svc-6hps9 May 8 14:16:46.101: INFO: Got endpoints: latency-svc-6hps9 [1.06249107s] May 8 14:16:46.128: INFO: Created: latency-svc-kn8rs May 8 14:16:46.143: INFO: Got endpoints: latency-svc-kn8rs [1.06240497s] May 8 14:16:46.191: INFO: Created: latency-svc-d4l2t May 8 14:16:46.206: INFO: Got endpoints: latency-svc-d4l2t [1.058674148s] May 8 14:16:46.243: INFO: Created: latency-svc-mb8xx May 8 14:16:46.258: INFO: Got endpoints: latency-svc-mb8xx [1.068188471s] May 8 14:16:46.284: INFO: Created: latency-svc-c9lbc May 8 14:16:46.322: INFO: Got endpoints: latency-svc-c9lbc [988.5021ms] May 8 
14:16:46.338: INFO: Created: latency-svc-6bqsn May 8 14:16:46.349: INFO: Got endpoints: latency-svc-6bqsn [875.693966ms] May 8 14:16:46.374: INFO: Created: latency-svc-8qpnp May 8 14:16:46.391: INFO: Got endpoints: latency-svc-8qpnp [871.125656ms] May 8 14:16:46.416: INFO: Created: latency-svc-k8x97 May 8 14:16:46.465: INFO: Got endpoints: latency-svc-k8x97 [894.2147ms] May 8 14:16:46.500: INFO: Created: latency-svc-94dcz May 8 14:16:46.518: INFO: Got endpoints: latency-svc-94dcz [883.741423ms] May 8 14:16:46.548: INFO: Created: latency-svc-9h2wl May 8 14:16:46.561: INFO: Got endpoints: latency-svc-9h2wl [858.448658ms] May 8 14:16:46.622: INFO: Created: latency-svc-rxvkc May 8 14:16:46.633: INFO: Got endpoints: latency-svc-rxvkc [840.291747ms] May 8 14:16:46.701: INFO: Created: latency-svc-ksszh May 8 14:16:46.711: INFO: Got endpoints: latency-svc-ksszh [852.439362ms] May 8 14:16:46.771: INFO: Created: latency-svc-hl7w5 May 8 14:16:46.787: INFO: Got endpoints: latency-svc-hl7w5 [868.139745ms] May 8 14:16:46.830: INFO: Created: latency-svc-g9hj9 May 8 14:16:46.844: INFO: Got endpoints: latency-svc-g9hj9 [854.261486ms] May 8 14:16:46.909: INFO: Created: latency-svc-6zdf2 May 8 14:16:46.913: INFO: Got endpoints: latency-svc-6zdf2 [852.175194ms] May 8 14:16:46.950: INFO: Created: latency-svc-8q7zr May 8 14:16:46.965: INFO: Got endpoints: latency-svc-8q7zr [864.362598ms] May 8 14:16:46.998: INFO: Created: latency-svc-cv2kh May 8 14:16:47.082: INFO: Got endpoints: latency-svc-cv2kh [939.339397ms] May 8 14:16:47.106: INFO: Created: latency-svc-667mg May 8 14:16:47.122: INFO: Got endpoints: latency-svc-667mg [915.637208ms] May 8 14:16:47.148: INFO: Created: latency-svc-7ls72 May 8 14:16:47.164: INFO: Got endpoints: latency-svc-7ls72 [905.864552ms] May 8 14:16:47.215: INFO: Created: latency-svc-6fmmb May 8 14:16:47.249: INFO: Got endpoints: latency-svc-6fmmb [926.887291ms] May 8 14:16:47.249: INFO: Created: latency-svc-cqgwz May 8 14:16:47.279: INFO: Got endpoints: latency-svc-cqgwz [930.413976ms] May 8 14:16:47.382: INFO: Created: latency-svc-hwqcv May 8 14:16:47.384: INFO: Got endpoints: latency-svc-hwqcv [992.935238ms] May 8 14:16:47.453: INFO: Created: latency-svc-kww4z May 8 14:16:47.478: INFO: Got endpoints: latency-svc-kww4z [1.012163081s] May 8 14:16:47.550: INFO: Created: latency-svc-4chtt May 8 14:16:47.574: INFO: Got endpoints: latency-svc-4chtt [1.055793071s] May 8 14:16:47.664: INFO: Created: latency-svc-brxs2 May 8 14:16:47.687: INFO: Got endpoints: latency-svc-brxs2 [1.126198225s] May 8 14:16:47.729: INFO: Created: latency-svc-8624n May 8 14:16:47.755: INFO: Got endpoints: latency-svc-8624n [1.121620011s] May 8 14:16:47.808: INFO: Created: latency-svc-vfvwf May 8 14:16:47.827: INFO: Got endpoints: latency-svc-vfvwf [1.115822368s] May 8 14:16:47.855: INFO: Created: latency-svc-5b6kv May 8 14:16:47.864: INFO: Got endpoints: latency-svc-5b6kv [1.076267439s] May 8 14:16:47.927: INFO: Created: latency-svc-5fzbt May 8 14:16:47.939: INFO: Got endpoints: latency-svc-5fzbt [1.094475156s] May 8 14:16:47.975: INFO: Created: latency-svc-pvrsj May 8 14:16:47.990: INFO: Got endpoints: latency-svc-pvrsj [1.076700844s] May 8 14:16:47.990: INFO: Latencies: [103.095671ms 118.060368ms 274.053084ms 340.28174ms 406.302373ms 468.770931ms 535.649679ms 592.190295ms 643.150604ms 685.970484ms 737.811118ms 741.844426ms 749.801966ms 750.283534ms 762.589549ms 766.517967ms 766.542007ms 769.236236ms 787.022001ms 804.5243ms 806.536044ms 816.900154ms 821.996759ms 828.92627ms 839.658791ms 840.291747ms 845.591417ms 
845.713487ms 846.750429ms 847.099167ms 848.469891ms 852.175194ms 852.439362ms 852.629334ms 852.797934ms 854.261486ms 854.372152ms 854.889432ms 856.712577ms 858.448658ms 861.812384ms 862.441793ms 863.024009ms 864.362598ms 868.139745ms 869.252301ms 869.911724ms 869.997745ms 870.475048ms 871.125656ms 872.059002ms 875.511371ms 875.556783ms 875.693966ms 880.521572ms 882.062913ms 883.741423ms 885.433973ms 886.962005ms 887.493376ms 887.782002ms 890.245486ms 890.421967ms 893.303686ms 893.646738ms 894.2147ms 895.850623ms 905.353376ms 905.864552ms 906.803055ms 908.452684ms 911.320644ms 913.383439ms 914.656862ms 915.637208ms 917.138662ms 919.784365ms 921.032566ms 921.885067ms 924.852243ms 926.887291ms 927.540158ms 928.173424ms 929.343937ms 929.561957ms 929.913691ms 930.413976ms 931.218184ms 933.855367ms 934.179706ms 935.474117ms 939.339397ms 939.471146ms 942.093375ms 945.936006ms 947.164293ms 947.51543ms 948.71173ms 948.812974ms 950.081171ms 952.972582ms 954.23765ms 955.023534ms 955.061717ms 958.56054ms 959.148624ms 959.312113ms 964.994894ms 965.90037ms 966.103828ms 967.356114ms 971.67458ms 973.102785ms 977.967848ms 978.481986ms 978.989438ms 980.267613ms 984.159905ms 986.222888ms 988.5021ms 989.288301ms 990.050746ms 991.154464ms 992.06093ms 992.935238ms 997.541877ms 1.001260486s 1.00160437s 1.005719679s 1.006717649s 1.007692657s 1.007868802s 1.011541906s 1.012163081s 1.013561787s 1.018056456s 1.018065893s 1.018131351s 1.019739118s 1.023251889s 1.02382039s 1.024165206s 1.026945863s 1.031157779s 1.031523871s 1.036098379s 1.040821225s 1.040889739s 1.046964913s 1.048889025s 1.048903105s 1.050049085s 1.05013859s 1.050237063s 1.05165779s 1.053549666s 1.055054344s 1.055778854s 1.055793071s 1.056670166s 1.058674148s 1.058966224s 1.059638625s 1.06240497s 1.06249107s 1.066821922s 1.06717326s 1.067204481s 1.067871794s 1.068188471s 1.073435053s 1.074788862s 1.076267439s 1.076700844s 1.077682518s 1.077840009s 1.080247568s 1.085210329s 1.090723161s 1.091354914s 1.094475156s 1.096670776s 1.103840061s 1.104740004s 1.111304024s 1.115822368s 1.117695856s 1.121620011s 1.125987651s 1.126198225s 1.133022277s 1.134886313s 1.139170594s 1.139424877s 1.14389415s 1.144010293s 1.147213399s 1.149248839s 1.150783115s 1.151493097s] May 8 14:16:47.990: INFO: 50 %ile: 952.972582ms May 8 14:16:47.990: INFO: 90 %ile: 1.094475156s May 8 14:16:47.990: INFO: 99 %ile: 1.150783115s May 8 14:16:47.990: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:16:47.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4913" for this suite. 
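Each Created/Got endpoints pair above is one sample: the test creates a Service selecting the svc-latency-rc pod and measures the time until a ready endpoint address is observed, then the 200 samples are sorted into the percentile summary (50 %ile ~953ms, 99 %ile ~1.15s here). A rough manual analogue of a single sample (invented service name; assumes GNU date and an existing replication controller such as svc-latency-rc):

    # Time how long a freshly created Service takes to get endpoints.
    start=$(date +%s%N)
    kubectl expose rc svc-latency-rc --name=latency-probe --port=80
    until [ -n "$(kubectl get endpoints latency-probe \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
      sleep 0.05
    done
    echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"
    kubectl delete service latency-probe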
May 8 14:17:16.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:17:16.087: INFO: namespace svc-latency-4913 deletion completed in 28.078344824s • [SLOW TEST:45.387 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:17:16.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-e360cd97-81aa-4d91-b6d0-86cd2c356874 STEP: Creating secret with name s-test-opt-upd-75d0726f-5551-4a99-8e59-8b9dba5ce45a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e360cd97-81aa-4d91-b6d0-86cd2c356874 STEP: Updating secret s-test-opt-upd-75d0726f-5551-4a99-8e59-8b9dba5ce45a STEP: Creating secret with name s-test-opt-create-d4af86e2-154d-4e90-8f78-082cc6a64f4e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:17:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2133" for this suite. 
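This test mounts secret volumes marked optional, then deletes one secret, updates another, and creates the third, waiting for the kubelet's periodic volume sync to reflect each change inside the running pod. The key field is optional: true, sketched here with invented names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-secret-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do ls /etc/opt-secret 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: opt-secret
          mountPath: /etc/opt-secret
      volumes:
      - name: opt-secret
        secret:
          secretName: maybe-missing-secret
          optional: true   # the pod starts even if the secret does not exist yet
    EOF
    # Creating the secret later makes its keys appear in /etc/opt-secret after
    # the kubelet's next sync, without restarting the pod:
    kubectl create secret generic maybe-missing-secret --from-literal=data-1=value-1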
May 8 14:17:48.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:17:48.459: INFO: namespace secrets-2133 deletion completed in 22.093030928s • [SLOW TEST:32.371 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:17:48.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-a006a9cc-cd6c-4f69-b6d2-782e176fe1d8 in namespace container-probe-7480 May 8 14:17:52.544: INFO: Started pod test-webserver-a006a9cc-cd6c-4f69-b6d2-782e176fe1d8 in namespace container-probe-7480 STEP: checking the pod's current state and verifying that restartCount is present May 8 14:17:52.548: INFO: Initial restart count of pod test-webserver-a006a9cc-cd6c-4f69-b6d2-782e176fe1d8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:21:53.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7480" for this suite. 
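The container-probe test above runs a webserver with an HTTP liveness probe against /healthz and asserts that restartCount stays at 0 for the whole observation window. A minimal sketch of such a probe, assuming an image that actually serves /healthz on port 80 (in this API generation the action fields sit on corev1.Handler; newer releases rename it ProbeHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// As long as GET /healthz keeps returning success, the kubelet leaves the
	// container alone and restartCount stays 0.
	container := corev1.Container{
		Name:  "test-webserver",
		Image: "example/webserver", // illustrative; assumed to serve /healthz
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(80),
				},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
	fmt.Println(container.Name)
}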
May 8 14:21:59.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:21:59.302: INFO: namespace container-probe-7480 deletion completed in 6.093726209s • [SLOW TEST:250.843 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:21:59.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:21:59.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5" in namespace "projected-8509" to be "success or failure" May 8 14:21:59.398: INFO: Pod "downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.49538ms May 8 14:22:01.402: INFO: Pod "downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024380967s May 8 14:22:03.407: INFO: Pod "downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028870312s STEP: Saw pod success May 8 14:22:03.407: INFO: Pod "downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5" satisfied condition "success or failure" May 8 14:22:03.410: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5 container client-container: STEP: delete the pod May 8 14:22:03.431: INFO: Waiting for pod downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5 to disappear May 8 14:22:03.435: INFO: Pod downwardapi-volume-9cedefc0-19ce-44e6-a066-54be6d171cb5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:22:03.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8509" for this suite. 
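The projected downwardAPI test above checks that a per-item mode is honored on the projected file. A sketch of the volume shape in question (volume name, path, and the 0400 mode are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // requested permission bits for this one file
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode, // per-item mode overrides the volume's DefaultMode
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}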
May 8 14:22:09.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:22:09.548: INFO: namespace projected-8509 deletion completed in 6.109037789s • [SLOW TEST:10.245 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:22:09.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 8 14:22:09.627: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 14:22:09.632: INFO: Waiting for terminating namespaces to be deleted... May 8 14:22:09.634: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 8 14:22:09.641: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:22:09.641: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:22:09.641: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:22:09.641: INFO: Container kindnet-cni ready: true, restart count 0 May 8 14:22:09.641: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 8 14:22:09.648: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 8 14:22:09.648: INFO: Container coredns ready: true, restart count 0 May 8 14:22:09.648: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 8 14:22:09.648: INFO: Container coredns ready: true, restart count 0 May 8 14:22:09.648: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 8 14:22:09.648: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:22:09.648: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 8 14:22:09.648: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d138aa15d58bb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
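The FailedScheduling event just logged is exactly what an unsatisfiable nodeSelector produces: the pod stays Pending and the scheduler reports that no node matched. A sketch of a pod spec that cannot be placed (the selector key and value are illustrative and assumed to match no node):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"label": "nonempty", // no node carries this label
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}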
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:22:10.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1369" for this suite. May 8 14:22:16.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:22:16.828: INFO: namespace sched-pred-1369 deletion completed in 6.156950289s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.280 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:22:16.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:22:16.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456" in namespace "downward-api-6470" to be "success or failure" May 8 14:22:16.925: INFO: Pod "downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027301ms May 8 14:22:18.928: INFO: Pod "downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007835038s May 8 14:22:21.052: INFO: Pod "downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13118917s STEP: Saw pod success May 8 14:22:21.052: INFO: Pod "downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456" satisfied condition "success or failure" May 8 14:22:21.054: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456 container client-container: STEP: delete the pod May 8 14:22:21.113: INFO: Waiting for pod downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456 to disappear May 8 14:22:21.237: INFO: Pod downwardapi-volume-7c6579b9-e5ed-49df-8fe4-efff13b0a456 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:22:21.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6470" for this suite. 
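The Downward API test above projects the container's CPU limit into a file via a resourceFieldRef. A sketch of that volume item (the container name must match a container in the same pod; the divisor choice is an assumption for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
						Divisor:       resource.MustParse("1m"), // expose the limit in millicores
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}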
May 8 14:22:27.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:22:27.351: INFO: namespace downward-api-6470 deletion completed in 6.110329802s • [SLOW TEST:10.523 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:22:27.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:22:31.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8845" for this suite. May 8 14:23:13.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:23:13.611: INFO: namespace kubelet-test-8845 deletion completed in 42.097342342s • [SLOW TEST:46.259 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:23:13.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:23:13.708: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1" in namespace "projected-7307" to be "success or failure" May 8 14:23:13.716: INFO: Pod "downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495415ms May 8 14:23:15.720: INFO: Pod "downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01214364s May 8 14:23:17.724: INFO: Pod "downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016033795s STEP: Saw pod success May 8 14:23:17.724: INFO: Pod "downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1" satisfied condition "success or failure" May 8 14:23:17.727: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1 container client-container: STEP: delete the pod May 8 14:23:17.776: INFO: Waiting for pod downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1 to disappear May 8 14:23:17.788: INFO: Pod downwardapi-volume-fce42def-769f-49f0-85e4-abc00e6316e1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:23:17.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7307" for this suite. May 8 14:23:23.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:23:23.957: INFO: namespace projected-7307 deletion completed in 6.164900712s • [SLOW TEST:10.346 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:23:23.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-347a7ef4-2de4-4a6f-a98b-407496ddadb8 STEP: Creating configMap with name cm-test-opt-upd-74000e68-46a8-4c5a-93c7-7a21336df092 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-347a7ef4-2de4-4a6f-a98b-407496ddadb8 STEP: Updating configmap cm-test-opt-upd-74000e68-46a8-4c5a-93c7-7a21336df092 STEP: Creating configMap with name cm-test-opt-create-54afc1f0-63ea-47ec-bbb8-97967c21b9b8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:23:32.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-265" for 
this suite. May 8 14:23:54.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:23:54.266: INFO: namespace projected-265 deletion completed in 22.088880436s • [SLOW TEST:30.309 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:23:54.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8c99f092-3ed9-41cc-8d60-bd054c5ccf48 STEP: Creating a pod to test consume secrets May 8 14:23:54.330: INFO: Waiting up to 5m0s for pod "pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85" in namespace "secrets-3424" to be "success or failure" May 8 14:23:54.371: INFO: Pod "pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85": Phase="Pending", Reason="", readiness=false. Elapsed: 40.700798ms May 8 14:23:56.375: INFO: Pod "pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045018624s May 8 14:23:58.379: INFO: Pod "pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048866946s STEP: Saw pod success May 8 14:23:58.379: INFO: Pod "pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85" satisfied condition "success or failure" May 8 14:23:58.381: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85 container secret-volume-test: STEP: delete the pod May 8 14:23:58.396: INFO: Waiting for pod pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85 to disappear May 8 14:23:58.400: INFO: Pod pod-secrets-6cb36340-4521-4d91-abed-3303f24ccb85 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:23:58.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3424" for this suite. 
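The Secrets defaultMode test above verifies that the permission bits requested for the whole volume are applied to every projected key. A sketch of that volume (secret name and the 0400 mode are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // applied to every file projected from the secret
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &defaultMode,
			},
		},
	}
	fmt.Println(vol.Name)
}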
May 8 14:24:04.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:24:04.492: INFO: namespace secrets-3424 deletion completed in 6.088883155s • [SLOW TEST:10.225 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:24:04.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 14:24:04.787: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4e5880ea-3334-47d0-8d2f-2b1e11b22857", Controller:(*bool)(0xc0030a0c3a), BlockOwnerDeletion:(*bool)(0xc0030a0c3b)}} May 8 14:24:04.942: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4d704a2b-5e93-4112-bbbb-181a33fc83ba", Controller:(*bool)(0xc002537192), BlockOwnerDeletion:(*bool)(0xc002537193)}} May 8 14:24:04.995: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1e2045ea-e989-4f6f-b9dc-0c2f79a5742c", Controller:(*bool)(0xc00253735a), BlockOwnerDeletion:(*bool)(0xc00253735b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:24:10.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9102" for this suite. 
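The garbage-collector test above wires pod1, pod2, and pod3 into a cycle of owner references (each the controller of the next) and checks that deletion is not blocked forever. A sketch of how one such reference, like those dumped in the log, is constructed:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds the kind of reference shown above: pod1 owned by pod3,
// pod2 by pod1, pod3 by pod2, forming a cycle the GC must not deadlock on.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	ctrl := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &ctrl,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	ref := ownerRef("pod3", types.UID("4e5880ea-3334-47d0-8d2f-2b1e11b22857"))
	fmt.Printf("%s/%s controller=%v\n", ref.Kind, ref.Name, *ref.Controller)
}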
May 8 14:24:16.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:24:16.129: INFO: namespace gc-9102 deletion completed in 6.085150183s • [SLOW TEST:11.637 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:24:16.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 8 14:24:16.225: INFO: Waiting up to 5m0s for pod "pod-84ec722b-d12d-45fc-893c-f2f7899cc223" in namespace "emptydir-938" to be "success or failure" May 8 14:24:16.229: INFO: Pod "pod-84ec722b-d12d-45fc-893c-f2f7899cc223": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798451ms May 8 14:24:18.233: INFO: Pod "pod-84ec722b-d12d-45fc-893c-f2f7899cc223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007813492s May 8 14:24:20.238: INFO: Pod "pod-84ec722b-d12d-45fc-893c-f2f7899cc223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012426217s STEP: Saw pod success May 8 14:24:20.238: INFO: Pod "pod-84ec722b-d12d-45fc-893c-f2f7899cc223" satisfied condition "success or failure" May 8 14:24:20.241: INFO: Trying to get logs from node iruya-worker pod pod-84ec722b-d12d-45fc-893c-f2f7899cc223 container test-container: STEP: delete the pod May 8 14:24:20.280: INFO: Waiting for pod pod-84ec722b-d12d-45fc-893c-f2f7899cc223 to disappear May 8 14:24:20.287: INFO: Pod pod-84ec722b-d12d-45fc-893c-f2f7899cc223 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:24:20.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-938" for this suite. 
May 8 14:24:26.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:24:26.392: INFO: namespace emptydir-938 deletion completed in 6.102412169s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:24:26.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 8 14:24:26.486: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 14:24:26.495: INFO: Waiting for terminating namespaces to be deleted... May 8 14:24:26.498: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 8 14:24:26.503: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:24:26.503: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:24:26.503: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:24:26.503: INFO: Container kindnet-cni ready: true, restart count 0 May 8 14:24:26.503: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 8 14:24:26.508: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 8 14:24:26.508: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:24:26.508: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 8 14:24:26.508: INFO: Container kindnet-cni ready: true, restart count 0 May 8 14:24:26.508: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 8 14:24:26.508: INFO: Container coredns ready: true, restart count 0 May 8 14:24:26.508: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 8 14:24:26.508: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 8 14:24:26.576: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 8 14:24:26.576: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 8 14:24:26.576: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node 
iruya-worker May 8 14:24:26.576: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 8 14:24:26.576: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 8 14:24:26.576: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0e226d54-d084-4683-a864-17d95e0a6811.160d13aa867df553], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3085/filler-pod-0e226d54-d084-4683-a864-17d95e0a6811 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e226d54-d084-4683-a864-17d95e0a6811.160d13ab23c654ef], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e226d54-d084-4683-a864-17d95e0a6811.160d13ab6b2b8b10], Reason = [Created], Message = [Created container filler-pod-0e226d54-d084-4683-a864-17d95e0a6811] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e226d54-d084-4683-a864-17d95e0a6811.160d13ab7b4d62cb], Reason = [Started], Message = [Started container filler-pod-0e226d54-d084-4683-a864-17d95e0a6811] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9800358-4df0-4edd-955c-721586aaccf5.160d13aa85bc5166], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3085/filler-pod-e9800358-4df0-4edd-955c-721586aaccf5 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9800358-4df0-4edd-955c-721586aaccf5.160d13aad8cac657], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9800358-4df0-4edd-955c-721586aaccf5.160d13ab41b04561], Reason = [Created], Message = [Created container filler-pod-e9800358-4df0-4edd-955c-721586aaccf5] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9800358-4df0-4edd-955c-721586aaccf5.160d13ab5dcaa278], Reason = [Started], Message = [Started container filler-pod-e9800358-4df0-4edd-955c-721586aaccf5] STEP: Considering event: Type = [Warning], Name = [additional-pod.160d13abed8eac03], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:24:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3085" for this suite. 
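The resource-limits predicate test above sums the CPU already requested on each node (the kindnet and coredns pods each request 100m), fills most of the remainder with filler pods, and then shows that one more pod draws "Insufficient cpu". The bookkeeping can be sketched with apimachinery's resource.Quantity arithmetic; the concrete numbers below are assumptions for illustration only:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2")  // node allocatable CPU (assumed)
	requested := resource.MustParse("300m") // sum of existing pod requests (assumed)

	remaining := allocatable.DeepCopy()
	remaining.Sub(requested)
	fmt.Printf("remaining schedulable CPU: %s\n", remaining.String())
	// A new pod requesting more than this triggers the
	// "0/3 nodes are available: ... Insufficient cpu" event seen above.
}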
May 8 14:24:40.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:24:40.213: INFO: namespace sched-pred-3085 deletion completed in 6.419083907s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.820 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:24:40.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-61d5dbe8-644b-4701-9032-4e39caa29aaf STEP: Creating a pod to test consume configMaps May 8 14:24:40.309: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca" in namespace "projected-6467" to be "success or failure" May 8 14:24:40.313: INFO: Pod "pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738689ms May 8 14:24:42.317: INFO: Pod "pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00793857s May 8 14:24:44.322: INFO: Pod "pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012554598s STEP: Saw pod success May 8 14:24:44.322: INFO: Pod "pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca" satisfied condition "success or failure" May 8 14:24:44.325: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca container projected-configmap-volume-test: STEP: delete the pod May 8 14:24:44.344: INFO: Waiting for pod pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca to disappear May 8 14:24:44.373: INFO: Pod pod-projected-configmaps-3f9b21cb-944e-491d-8e31-0377bc8eddca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:24:44.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6467" for this suite. 
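The projected configMap test above consumes a ConfigMap through the projected volume type rather than a plain configMap volume. A sketch of that source shape (the referenced ConfigMap name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}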
May 8 14:24:50.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:24:50.483: INFO: namespace projected-6467 deletion completed in 6.105883884s • [SLOW TEST:10.270 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:24:50.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 8 14:24:50.532: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 14:24:50.578: INFO: Waiting for terminating namespaces to be deleted... May 8 14:24:50.580: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 8 14:24:50.585: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:24:50.585: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:24:50.585: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 8 14:24:50.585: INFO: Container kindnet-cni ready: true, restart count 0 May 8 14:24:50.585: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 8 14:24:50.591: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 8 14:24:50.591: INFO: Container kindnet-cni ready: true, restart count 0 May 8 14:24:50.591: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 8 14:24:50.591: INFO: Container kube-proxy ready: true, restart count 0 May 8 14:24:50.591: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 8 14:24:50.591: INFO: Container coredns ready: true, restart count 0 May 8 14:24:50.591: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 8 14:24:50.591: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7433da4c-40d3-4529-bdab-9bbe598931a5 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-7433da4c-40d3-4529-bdab-9bbe598931a5 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7433da4c-40d3-4529-bdab-9bbe598931a5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:24:58.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9367" for this suite. May 8 14:25:16.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:25:16.859: INFO: namespace sched-pred-9367 deletion completed in 18.090076788s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.376 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:25:16.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 8 14:25:16.987: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:25:24.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7497" for this suite. 
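The InitContainer test above asserts that init containers run to completion, in order, before the app container starts on a RestartNever pod. A hedged sketch of such a spec (names, images, and commands are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With RestartPolicy=Never, a failed init container fails the whole pod
	// instead of being retried.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers")
}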
May 8 14:25:30.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:25:30.666: INFO: namespace init-container-7497 deletion completed in 6.098450467s • [SLOW TEST:13.806 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:25:30.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 8 14:25:35.305: INFO: Successfully updated pod "pod-update-6fb02b74-13ce-4a77-a578-e76231ed0818" STEP: verifying the updated pod is in kubernetes May 8 14:25:35.317: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:25:35.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8203" for this suite. 
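The Pods update test above is the classic read-modify-write against the API server. A hedged client-go sketch of that flow; the namespace, pod name, and label are illustrative, and the signatures shown match the pre-1.18 client-go style this suite ran against (newer releases additionally take a context.Context and an Options struct):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := clientset.CoreV1().Pods("default")
	pod, err := pods.Get("pod-update-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // illustrative label change
	if _, err := pods.Update(pod); err != nil {
		panic(err) // a conflict means someone else wrote first; re-get and retry
	}
	fmt.Println("pod updated")
}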
May 8 14:25:57.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:25:57.399: INFO: namespace pods-8203 deletion completed in 22.078292834s • [SLOW TEST:26.731 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:25:57.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:25:57.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e" in namespace "downward-api-9008" to be "success or failure" May 8 14:25:57.482: INFO: Pod "downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.98238ms May 8 14:25:59.486: INFO: Pod "downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833763s May 8 14:26:01.490: INFO: Pod "downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012110119s STEP: Saw pod success May 8 14:26:01.490: INFO: Pod "downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e" satisfied condition "success or failure" May 8 14:26:01.492: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e container client-container: STEP: delete the pod May 8 14:26:01.507: INFO: Waiting for pod downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e to disappear May 8 14:26:01.512: INFO: Pod downwardapi-volume-188c2caa-9627-4591-9934-8a226ec7022e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:26:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9008" for this suite. 
May 8 14:26:07.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:26:07.599: INFO: namespace downward-api-9008 deletion completed in 6.084557579s • [SLOW TEST:10.199 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:26:07.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 8 14:26:12.731: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:26:13.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-386" for this suite. 
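The ReplicaSet test above exercises adoption and release: a ReplicaSet whose selector matches an existing bare pod adopts it, and editing that pod's label so it no longer matches releases it again. A sketch of the selector/template pairing involved (names and labels mirror the test's naming but are illustrative here):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels}, // governs adoption and release
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
				},
			},
		},
	}
	fmt.Println(rs.Spec.Selector.MatchLabels)
}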
May 8 14:26:35.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:26:35.935: INFO: namespace replicaset-386 deletion completed in 22.162447182s • [SLOW TEST:28.334 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:26:35.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9aaa6d81-59d1-4555-bd54-b5ef45de97e0 STEP: Creating a pod to test consume configMaps May 8 14:26:36.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3" in namespace "configmap-9853" to be "success or failure" May 8 14:26:36.109: INFO: Pod "pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 70.658512ms May 8 14:26:38.163: INFO: Pod "pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124349432s May 8 14:26:40.172: INFO: Pod "pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133364233s STEP: Saw pod success May 8 14:26:40.172: INFO: Pod "pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3" satisfied condition "success or failure" May 8 14:26:40.175: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3 container configmap-volume-test: STEP: delete the pod May 8 14:26:40.215: INFO: Waiting for pod pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3 to disappear May 8 14:26:40.231: INFO: Pod pod-configmaps-a3f7dd4b-3666-4b36-b7be-3fcec1e82ce3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:26:40.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9853" for this suite. 
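The ConfigMap mappings test above remaps a single key to a chosen path and reads it back as a non-root user. A hedged sketch of the pod shape (UID, names, key, and paths are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	user := int64(1000) // non-root UID; the mounted file must still be readable
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mapped"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &user},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Items remaps one key to a chosen path instead of
						// projecting every key under its own name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}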
May 8 14:26:46.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:26:46.366: INFO: namespace configmap-9853 deletion completed in 6.130628091s • [SLOW TEST:10.430 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:26:46.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 8 14:26:46.450: INFO: Waiting up to 5m0s for pod "pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3" in namespace "emptydir-551" to be "success or failure" May 8 14:26:46.453: INFO: Pod "pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240137ms May 8 14:26:48.457: INFO: Pod "pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007414396s May 8 14:26:50.462: INFO: Pod "pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011449055s STEP: Saw pod success May 8 14:26:50.462: INFO: Pod "pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3" satisfied condition "success or failure" May 8 14:26:50.465: INFO: Trying to get logs from node iruya-worker pod pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3 container test-container: STEP: delete the pod May 8 14:26:50.499: INFO: Waiting for pod pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3 to disappear May 8 14:26:50.519: INFO: Pod pod-5175e4c5-2f80-4134-91aa-5fe6df18d9c3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:26:50.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-551" for this suite. 
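The EmptyDir tmpfs test above requests the "Memory" medium, which backs the emptyDir with tmpfs, and then stats the mount point to confirm the expected mode bits. A minimal sketch of that volume:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs instead of node-local disk
			},
		},
	}
	fmt.Println(vol.VolumeSource.EmptyDir.Medium)
}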
May 8 14:26:56.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:26:56.629: INFO: namespace emptydir-551 deletion completed in 6.10584182s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:26:56.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 14:27:04.747: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:04.772: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:06.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:06.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:08.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:08.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:10.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:10.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:12.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:12.782: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:14.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:14.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:16.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:16.777: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:18.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:18.777: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:20.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:20.777: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:22.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:22.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:24.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:24.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:26.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:26.777: INFO: Pod pod-with-prestop-exec-hook still exists May 8 
14:27:28.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:28.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:30.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:30.776: INFO: Pod pod-with-prestop-exec-hook still exists May 8 14:27:32.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 14:27:32.776: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:27:32.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8351" for this suite. May 8 14:27:54.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:27:54.908: INFO: namespace container-lifecycle-hook-8351 deletion completed in 22.119342971s • [SLOW TEST:58.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:27:54.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 8 14:27:55.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726604,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 14:27:55.063: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726605,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 8 14:27:55.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726606,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 8 14:28:05.181: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726627,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 14:28:05.182: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726629,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 8 14:28:05.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1455,SelfLink:/api/v1/namespaces/watch-1455/configmaps/e2e-watch-test-label-changed,UID:89614f5b-887d-45de-9142-bd421d6d9268,ResourceVersion:9726630,Generation:0,CreationTimestamp:2020-05-08 14:27:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:28:05.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1455" for this suite. May 8 14:28:11.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:28:11.307: INFO: namespace watch-1455 deletion completed in 6.122064227s • [SLOW TEST:16.400 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:28:11.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-525/configmap-test-c8f84ec2-4c8c-4278-bafb-e86bc6d92860 STEP: Creating a pod to test consume configMaps May 8 14:28:11.397: INFO: Waiting up to 5m0s for pod "pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46" in namespace "configmap-525" to be "success or failure" May 8 14:28:11.452: INFO: Pod "pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46": Phase="Pending", Reason="", readiness=false. Elapsed: 54.388194ms May 8 14:28:13.456: INFO: Pod "pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058669152s May 8 14:28:15.460: INFO: Pod "pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062916804s STEP: Saw pod success May 8 14:28:15.460: INFO: Pod "pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46" satisfied condition "success or failure" May 8 14:28:15.464: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46 container env-test: STEP: delete the pod May 8 14:28:15.553: INFO: Waiting for pod pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46 to disappear May 8 14:28:15.691: INFO: Pod pod-configmaps-67f01940-27da-4a32-a65e-2f856e637e46 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:28:15.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-525" for this suite. 
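The configmap-525 test above consumes a ConfigMap key through a container environment variable. A rough client-go sketch of that pattern follows, under the same version caveat as the earlier sketch; the ConfigMap name, key, variable name, and image are illustrative, not the test's own values.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Create the ConfigMap the pod will read from.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps("default").Create(cm); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // dump the environment to the log
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Resolve the variable from one key of the ConfigMap above.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}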
May 8 14:28:21.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:28:21.933: INFO: namespace configmap-525 deletion completed in 6.237117841s • [SLOW TEST:10.625 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:28:21.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-a3d0b96d-d66a-4fcc-8cad-7c91b4f9f08d STEP: Creating secret with name secret-projected-all-test-volume-3d8d45af-f429-4feb-9d18-dffdf8c65b79 STEP: Creating a pod to test Check all projections for projected volume plugin May 8 14:28:22.057: INFO: Waiting up to 5m0s for pod "projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b" in namespace "projected-8165" to be "success or failure" May 8 14:28:22.078: INFO: Pod "projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.223153ms May 8 14:28:24.082: INFO: Pod "projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025266473s May 8 14:28:26.087: INFO: Pod "projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029667413s STEP: Saw pod success May 8 14:28:26.087: INFO: Pod "projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b" satisfied condition "success or failure" May 8 14:28:26.090: INFO: Trying to get logs from node iruya-worker pod projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b container projected-all-volume-test: STEP: delete the pod May 8 14:28:26.144: INFO: Waiting for pod projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b to disappear May 8 14:28:26.164: INFO: Pod projected-volume-e52b58b2-e395-4cf7-ba62-105083a6066b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:28:26.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8165" for this suite. 
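The projected-8165 test above layers a ConfigMap, a Secret, and downward API data into a single mount through one projected volume. A hedged sketch of such a volume source: "projected-cm" and "projected-secret" are assumed to exist already, and all names and paths are illustrative rather than the test's generated ones.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-all-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "all-in-one",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							// A ConfigMap and a Secret, assumed to exist already.
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
							}},
							// Downward API data projected into the same directory.
							{DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /projected-volume && cat /projected-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "all-in-one", MountPath: "/projected-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}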
May 8 14:28:32.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:28:32.266: INFO: namespace projected-8165 deletion completed in 6.098849028s • [SLOW TEST:10.333 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:28:32.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 8 14:28:32.350: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7689,SelfLink:/api/v1/namespaces/watch-7689/configmaps/e2e-watch-test-watch-closed,UID:2739eb42-3e4c-431e-a4b2-224d3b0e2486,ResourceVersion:9726743,Generation:0,CreationTimestamp:2020-05-08 14:28:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 8 14:28:32.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7689,SelfLink:/api/v1/namespaces/watch-7689/configmaps/e2e-watch-test-watch-closed,UID:2739eb42-3e4c-431e-a4b2-224d3b0e2486,ResourceVersion:9726744,Generation:0,CreationTimestamp:2020-05-08 14:28:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 8 14:28:32.361: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7689,SelfLink:/api/v1/namespaces/watch-7689/configmaps/e2e-watch-test-watch-closed,UID:2739eb42-3e4c-431e-a4b2-224d3b0e2486,ResourceVersion:9726745,Generation:0,CreationTimestamp:2020-05-08 14:28:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 14:28:32.361: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7689,SelfLink:/api/v1/namespaces/watch-7689/configmaps/e2e-watch-test-watch-closed,UID:2739eb42-3e4c-431e-a4b2-224d3b0e2486,ResourceVersion:9726746,Generation:0,CreationTimestamp:2020-05-08 14:28:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:28:32.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7689" for this suite. May 8 14:28:38.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:28:38.480: INFO: namespace watch-7689 deletion completed in 6.113910059s • [SLOW TEST:6.213 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:28:38.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 8 14:28:38.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5473' May 8 14:28:41.409: INFO: stderr: "" May 8 14:28:41.409: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for 
all containers in name=update-demo pods to come up. May 8 14:28:41.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5473' May 8 14:28:41.619: INFO: stderr: "" May 8 14:28:41.619: INFO: stdout: "update-demo-nautilus-rq7b9 update-demo-nautilus-xcrhr " May 8 14:28:41.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rq7b9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5473' May 8 14:28:41.715: INFO: stderr: "" May 8 14:28:41.715: INFO: stdout: "" May 8 14:28:41.715: INFO: update-demo-nautilus-rq7b9 is created but not running May 8 14:28:46.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5473' May 8 14:28:46.829: INFO: stderr: "" May 8 14:28:46.829: INFO: stdout: "update-demo-nautilus-rq7b9 update-demo-nautilus-xcrhr " May 8 14:28:46.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rq7b9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5473' May 8 14:28:46.975: INFO: stderr: "" May 8 14:28:46.975: INFO: stdout: "true" May 8 14:28:46.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rq7b9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5473' May 8 14:28:47.061: INFO: stderr: "" May 8 14:28:47.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 14:28:47.061: INFO: validating pod update-demo-nautilus-rq7b9 May 8 14:28:47.065: INFO: got data: { "image": "nautilus.jpg" } May 8 14:28:47.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 14:28:47.065: INFO: update-demo-nautilus-rq7b9 is verified up and running May 8 14:28:47.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xcrhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5473' May 8 14:28:47.164: INFO: stderr: "" May 8 14:28:47.164: INFO: stdout: "true" May 8 14:28:47.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xcrhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5473' May 8 14:28:47.260: INFO: stderr: "" May 8 14:28:47.260: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 14:28:47.260: INFO: validating pod update-demo-nautilus-xcrhr May 8 14:28:47.264: INFO: got data: { "image": "nautilus.jpg" } May 8 14:28:47.264: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
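The kubectl go-template invocations above check, pod by pod, that the update-demo container has entered the running state and which image it runs. The same checks can be expressed directly with client-go; the following is a sketch, not what the e2e framework executes, and it assumes the kubectl-5473 namespace and the name=update-demo label seen in the log.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same selection the log's kubectl calls use: -l name=update-demo.
	pods, err := client.CoreV1().Pods("kubectl-5473").List(metav1.ListOptions{
		LabelSelector: "name=update-demo",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		// Mirrors: {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
		running := false
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.Name == "update-demo" && cs.State.Running != nil {
				running = true
			}
		}
		// Mirrors the second template, which extracts the container image.
		image := ""
		for _, c := range pod.Spec.Containers {
			if c.Name == "update-demo" {
				image = c.Image
			}
		}
		fmt.Printf("%s running=%v image=%s\n", pod.Name, running, image)
	}
}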
May 8 14:28:47.264: INFO: update-demo-nautilus-xcrhr is verified up and running STEP: using delete to clean up resources May 8 14:28:47.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5473' May 8 14:28:47.390: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:28:47.390: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 14:28:47.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5473' May 8 14:28:47.488: INFO: stderr: "No resources found.\n" May 8 14:28:47.489: INFO: stdout: "" May 8 14:28:47.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5473 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 14:28:47.621: INFO: stderr: "" May 8 14:28:47.621: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:28:47.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5473" for this suite. May 8 14:29:09.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:29:09.720: INFO: namespace kubectl-5473 deletion completed in 22.094437873s • [SLOW TEST:31.240 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:29:09.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:29:09.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea" in namespace "projected-6150" to be "success or failure" May 8 14:29:09.804: INFO: Pod "downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.976036ms May 8 14:29:11.896: INFO: Pod "downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09413773s May 8 14:29:13.900: INFO: Pod "downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098404511s STEP: Saw pod success May 8 14:29:13.900: INFO: Pod "downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea" satisfied condition "success or failure" May 8 14:29:13.903: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea container client-container: STEP: delete the pod May 8 14:29:13.963: INFO: Waiting for pod downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea to disappear May 8 14:29:13.971: INFO: Pod downwardapi-volume-c049724e-d7ea-4c18-9542-a350fe48aeea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:29:13.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6150" for this suite. May 8 14:29:20.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:29:20.076: INFO: namespace projected-6150 deletion completed in 6.1022546s • [SLOW TEST:10.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:29:20.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:29:20.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1" in namespace "projected-348" to be "success or failure" May 8 14:29:20.181: INFO: Pod "downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.352952ms May 8 14:29:22.185: INFO: Pod "downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019613974s May 8 14:29:24.189: INFO: Pod "downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023074374s STEP: Saw pod success May 8 14:29:24.189: INFO: Pod "downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1" satisfied condition "success or failure" May 8 14:29:24.191: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1 container client-container: STEP: delete the pod May 8 14:29:24.329: INFO: Waiting for pod downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1 to disappear May 8 14:29:24.384: INFO: Pod downwardapi-volume-518e8448-674f-4273-a28d-3a5c87431eb1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:29:24.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-348" for this suite. May 8 14:29:30.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:29:30.507: INFO: namespace projected-348 deletion completed in 6.119114696s • [SLOW TEST:10.430 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:29:30.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 8 14:29:35.137: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-926 pod-service-account-1c316fc8-0595-43db-a78c-bc3ff11f0c6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 8 14:29:35.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-926 pod-service-account-1c316fc8-0595-43db-a78c-bc3ff11f0c6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 8 14:29:35.568: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-926 pod-service-account-1c316fc8-0595-43db-a78c-bc3ff11f0c6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:29:35.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-926" for this suite. 
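The svcaccounts-926 test above reads the token, CA bundle, and namespace files that the kubelet mounts for the pod's service account, using kubectl exec. A minimal sketch of a pod that prints the same three files itself; the pod name and busybox image are assumptions, and the container name "test" simply echoes the -c=test flag in the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sa-token-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// ServiceAccountName defaults to "default"; the token, ca.crt, and
			// namespace files are auto-mounted unless automountServiceAccountToken
			// is disabled on the pod or the service account.
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"cat /var/run/secrets/kubernetes.io/serviceaccount/token " +
						"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt " +
						"/var/run/secrets/kubernetes.io/serviceaccount/namespace"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}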
May 8 14:29:41.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:29:41.859: INFO: namespace svcaccounts-926 deletion completed in 6.099706707s • [SLOW TEST:11.352 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:29:41.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 8 14:29:41.951: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 8 14:29:41.996: INFO: Pod name sample-pod: Found 0 pods out of 1 May 8 14:29:47.001: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 8 14:29:47.001: INFO: Creating deployment "test-rolling-update-deployment" May 8 14:29:47.005: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 8 14:29:47.031: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 8 14:29:49.040: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 8 14:29:49.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724544987, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724544987, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724544987, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724544987, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 14:29:51.047: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 8 14:29:51.057: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8687,SelfLink:/apis/apps/v1/namespaces/deployment-8687/deployments/test-rolling-update-deployment,UID:e0982295-796b-4db1-aaef-26be75f77a0f,ResourceVersion:9727074,Generation:1,CreationTimestamp:2020-05-08 14:29:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-08 14:29:47 +0000 UTC 2020-05-08 14:29:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-08 14:29:50 +0000 UTC 2020-05-08 14:29:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 8 14:29:51.059: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8687,SelfLink:/apis/apps/v1/namespaces/deployment-8687/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:5099b857-0c14-4fce-975e-94ff6ce7707f,ResourceVersion:9727063,Generation:1,CreationTimestamp:2020-05-08 14:29:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e0982295-796b-4db1-aaef-26be75f77a0f 0xc002132ad7 0xc002132ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 8 14:29:51.059: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 8 14:29:51.059: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8687,SelfLink:/apis/apps/v1/namespaces/deployment-8687/replicasets/test-rolling-update-controller,UID:15eb1b72-bb0b-4de3-a44a-95f0a8575024,ResourceVersion:9727073,Generation:2,CreationTimestamp:2020-05-08 14:29:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e0982295-796b-4db1-aaef-26be75f77a0f 0xc0021329ef 0xc002132a00}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 8 14:29:51.062: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-gc8np" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-gc8np,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8687,SelfLink:/api/v1/namespaces/deployment-8687/pods/test-rolling-update-deployment-79f6b9d75c-gc8np,UID:acff2f69-d744-467f-9c11-0e14875d415a,ResourceVersion:9727062,Generation:0,CreationTimestamp:2020-05-08 14:29:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 5099b857-0c14-4fce-975e-94ff6ce7707f 0xc0021333e7 0xc0021333e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zd62h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zd62h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zd62h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002133460} {node.kubernetes.io/unreachable Exists NoExecute 0xc002133480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 14:29:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 14:29:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 14:29:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 14:29:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.102,StartTime:2020-05-08 14:29:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-08 14:29:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://19e476bc028b24812b52b50a5fe637a10bc7075d4424ddfe8d29630f7816ca97}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:29:51.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8687" for this suite. 
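The Deployment dump above shows the default RollingUpdate strategy in action (maxUnavailable and maxSurge both defaulting to 25%), with the old replica set scaled to zero as the new one becomes available. A hedged client-go sketch of a Deployment in that shape; the name, namespace, and replica count are illustrative, while the redis test image and name=sample-pod label match the log.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the default strategy; left unset, maxUnavailable
			// and maxSurge both default to 25%, as the dump above shows.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("default").Create(dep); err != nil {
		panic(err)
	}
}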
May 8 14:29:57.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:29:57.254: INFO: namespace deployment-8687 deletion completed in 6.190096199s • [SLOW TEST:15.395 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:29:57.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 8 14:29:57.563: INFO: Waiting up to 5m0s for pod "client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c" in namespace "containers-2748" to be "success or failure" May 8 14:29:57.571: INFO: Pod "client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.777326ms May 8 14:29:59.575: INFO: Pod "client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012220891s May 8 14:30:01.603: INFO: Pod "client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040778892s STEP: Saw pod success May 8 14:30:01.603: INFO: Pod "client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c" satisfied condition "success or failure" May 8 14:30:01.606: INFO: Trying to get logs from node iruya-worker pod client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c container test-container: STEP: delete the pod May 8 14:30:01.633: INFO: Waiting for pod client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c to disappear May 8 14:30:01.670: INFO: Pod client-containers-81a675c0-bd33-4df8-b073-3aa6ba27723c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:30:01.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2748" for this suite. 
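The containers-2748 test above verifies that a pod spec can override both halves of an image's default invocation. In pod terms, Command replaces the image ENTRYPOINT and Args replaces the image CMD. A minimal sketch of that override, with illustrative names and image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-all-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image's ENTRYPOINT; Args replaces its CMD.
				Command: []string{"echo"},
				Args:    []string{"override", "all"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}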
May 8 14:30:07.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:30:07.801: INFO: namespace containers-2748 deletion completed in 6.128215057s • [SLOW TEST:10.547 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:30:07.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-1f060f82-5982-41c8-8b7d-42f8bc65185d [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:30:07.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8653" for this suite. 
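The secrets-8653 test above asserts a negative case: the API server's validation rejects a Secret whose data map contains an empty key. A short sketch demonstrating the rejection; the Secret name is illustrative, and the exact error text is whatever the server's validation returns.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "empty-key-demo"},
		Data:       map[string][]byte{"": []byte("value")}, // empty key: invalid
	}
	if _, err := client.CoreV1().Secrets("default").Create(secret); err != nil {
		// Expected path: server-side validation refuses the empty key.
		fmt.Println("create failed as expected:", err)
		return
	}
	fmt.Println("unexpected: secret with an empty key was accepted")
}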
May 8 14:30:13.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:30:14.044: INFO: namespace secrets-8653 deletion completed in 6.118655755s • [SLOW TEST:6.242 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:30:14.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 8 14:30:18.287: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 8 14:30:33.395: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:30:33.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2225" for this suite.
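The pods-2225 test above deletes a pod "gracefully" and then waits for the kubelet to observe the termination notice. The knob it exercises is the delete grace period; a sketch of setting it explicitly with client-go follows. The pod name "grace-demo" is hypothetical, and this uses the v1.15-era Delete signature (name plus *metav1.DeleteOptions).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// With a 30s grace period the kubelet sends SIGTERM first, then SIGKILL
	// once the period lapses; grace 0 is the forced path kubectl warns about.
	grace := int64(30)
	if err := client.CoreV1().Pods("default").Delete("grace-demo",
		&metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	fmt.Println("delete requested with a 30s grace period")
}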
May 8 14:30:39.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:30:39.496: INFO: namespace pods-2225 deletion completed in 6.092606559s • [SLOW TEST:25.452 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:30:39.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-0821020c-6143-4581-b464-564a6adc2e74 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0821020c-6143-4581-b464-564a6adc2e74 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:30:45.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9212" for this suite. 
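The configmap-9212 test above mutates a ConfigMap that a running pod mounts as a volume, then waits for the kubelet to rewrite the projected files; that wait is why the step "waiting to observe update in volume" takes a few seconds. A hedged sketch of the update half of that flow; the ConfigMap "volume-demo" and its key are assumptions, and the mounting pod is assumed to already exist.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cm, err := client.CoreV1().ConfigMaps("default").Get("volume-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	// Change the key a running pod mounts; the kubelet refreshes the projected
	// file on a later sync, which is the delay the test waits out.
	cm.Data["data-1"] = "value-2"
	if _, err := client.CoreV1().ConfigMaps("default").Update(cm); err != nil {
		panic(err)
	}
}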
May 8 14:31:07.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:31:07.778: INFO: namespace configmap-9212 deletion completed in 22.114762333s • [SLOW TEST:28.281 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:31:07.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:31:07.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94" in namespace "downward-api-7745" to be "success or failure" May 8 14:31:07.915: INFO: Pod "downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344018ms May 8 14:31:09.920: INFO: Pod "downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008271738s May 8 14:31:11.924: INFO: Pod "downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01261811s STEP: Saw pod success May 8 14:31:11.924: INFO: Pod "downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94" satisfied condition "success or failure" May 8 14:31:11.928: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94 container client-container: STEP: delete the pod May 8 14:31:11.966: INFO: Waiting for pod downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94 to disappear May 8 14:31:12.023: INFO: Pod downwardapi-volume-4f65d1e9-6ef6-4af4-b153-24a7ff319d94 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:31:12.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7745" for this suite. 
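DefaultMode here is the file mode applied to every file projected into the downwardAPI volume. A sketch, assuming a 0400 mode and hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                   # applied to every projected file unless an item overrides it
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels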
May 8 14:31:18.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:31:18.118: INFO: namespace downward-api-7745 deletion completed in 6.091452676s • [SLOW TEST:10.339 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:31:18.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7746.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7746.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 14:31:24.292: INFO: DNS probes using dns-7746/dns-test-407cf419-0f4c-4f41-a9da-5479777030aa succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:31:24.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7746" for this suite. 
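The probe loops above assert that kubernetes.default.svc.cluster.local resolves over both UDP and TCP, and that the pod's own A record (<ip-with-dashes>.<namespace>.pod.cluster.local) resolves. A much-reduced sketch of a single probe (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: dns-probe-demo                    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: busybox
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local && echo OK"]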
May 8 14:31:30.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:31:30.533: INFO: namespace dns-7746 deletion completed in 6.20483235s • [SLOW TEST:12.415 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:31:30.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 8 14:31:30.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4985' May 8 14:31:30.675: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 8 14:31:30.675: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 8 14:31:34.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4985' May 8 14:31:34.829: INFO: stderr: "" May 8 14:31:34.829: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:31:34.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4985" for this suite. 
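The stderr line above records that generator-based kubectl run was already deprecated on this v1.15 cluster. Roughly the Deployment it generated, with the non-deprecated equivalent as a comment (labels follow the old run=<name> convention; treat the details as illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine

# Non-deprecated equivalent:
#   kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine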
May 8 14:31:56.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:31:56.922: INFO: namespace kubectl-4985 deletion completed in 22.089473726s • [SLOW TEST:26.389 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:31:56.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-3d807fea-eb02-41fb-88b1-b838830c7717 STEP: Creating a pod to test consume secrets May 8 14:31:57.064: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae" in namespace "projected-215" to be "success or failure" May 8 14:31:57.125: INFO: Pod "pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae": Phase="Pending", Reason="", readiness=false. Elapsed: 61.324642ms May 8 14:31:59.129: INFO: Pod "pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065384553s May 8 14:32:01.133: INFO: Pod "pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06942414s STEP: Saw pod success May 8 14:32:01.133: INFO: Pod "pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae" satisfied condition "success or failure" May 8 14:32:01.136: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae container secret-volume-test: STEP: delete the pod May 8 14:32:01.188: INFO: Waiting for pod pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae to disappear May 8 14:32:01.196: INFO: Pod pod-projected-secrets-81927677-0343-4f87-8b9a-a8736f44caae no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:32:01.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-215" for this suite. 
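"Consumable in multiple volumes" means one secret projected at two mount points in the same pod. A sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-one /etc/secret-two"]
    volumeMounts:
    - name: secret-one
      mountPath: /etc/secret-one
    - name: secret-two
      mountPath: /etc/secret-two
  volumes:
  - name: secret-one
    projected:
      sources:
      - secret:
          name: demo-secret               # hypothetical
  - name: secret-two
    projected:
      sources:
      - secret:
          name: demo-secret               # same secret, second mount point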
May 8 14:32:07.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:32:07.386: INFO: namespace projected-215 deletion completed in 6.186911034s • [SLOW TEST:10.464 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:32:07.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 14:32:07.576: INFO: Waiting up to 5m0s for pod "pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4" in namespace "emptydir-5618" to be "success or failure" May 8 14:32:07.586: INFO: Pod "pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.676591ms May 8 14:32:09.590: INFO: Pod "pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013672796s May 8 14:32:11.595: INFO: Pod "pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018431499s STEP: Saw pod success May 8 14:32:11.595: INFO: Pod "pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4" satisfied condition "success or failure" May 8 14:32:11.598: INFO: Trying to get logs from node iruya-worker pod pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4 container test-container: STEP: delete the pod May 8 14:32:11.751: INFO: Waiting for pod pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4 to disappear May 8 14:32:11.760: INFO: Pod pod-c4c8f724-09e0-40e5-aa74-ddd974da93e4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:32:11.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5618" for this suite. 
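The (root,0777,tmpfs) variant backs the emptyDir with memory and verifies a 0777 mode as root. Sketch (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo               # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /cache && mount | grep /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                      # tmpfs; omit medium for node-default storage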
May 8 14:32:17.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:32:17.862: INFO: namespace emptydir-5618 deletion completed in 6.097687107s • [SLOW TEST:10.476 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:32:17.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce May 8 14:32:17.953: INFO: Pod name my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce: Found 0 pods out of 1 May 8 14:32:22.958: INFO: Pod name my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce: Found 1 pods out of 1 May 8 14:32:22.958: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce" are running May 8 14:32:22.962: INFO: Pod "my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce-vwl8s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 14:32:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 14:32:21 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 14:32:21 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 14:32:17 +0000 UTC Reason: Message:}]) May 8 14:32:22.962: INFO: Trying to dial the pod May 8 14:32:27.974: INFO: Controller my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce: Got expected result from replica 1 [my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce-vwl8s]: "my-hostname-basic-df614f3b-a561-4c25-8aa9-5b59c0d8e8ce-vwl8s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:32:27.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3054" for this suite. 
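The ReplicationController test runs an image that answers with its own hostname and then dials each replica, expecting the pod name back (the "Got expected result from replica 1" line). A sketch of such an RC; the image is a placeholder, any server that echoes its hostname would do:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic                 # hypothetical
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: example.com/serve-hostname:latest   # placeholder image
        ports:
        - containerPort: 9376             # illustrative port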
May 8 14:32:34.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:32:34.080: INFO: namespace replication-controller-3054 deletion completed in 6.10227298s • [SLOW TEST:16.217 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:32:34.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d723fd88-155a-47e5-ba53-9ac05c732024 STEP: Creating a pod to test consume configMaps May 8 14:32:34.235: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44" in namespace "projected-2996" to be "success or failure" May 8 14:32:34.239: INFO: Pod "pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622798ms May 8 14:32:36.247: INFO: Pod "pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011560384s May 8 14:32:38.251: INFO: Pod "pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016224233s STEP: Saw pod success May 8 14:32:38.252: INFO: Pod "pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44" satisfied condition "success or failure" May 8 14:32:38.255: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44 container projected-configmap-volume-test: STEP: delete the pod May 8 14:32:38.283: INFO: Waiting for pod pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44 to disappear May 8 14:32:38.293: INFO: Pod pod-projected-configmaps-7cc6e3b1-c7f3-4876-9e48-8e740ab8bc44 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:32:38.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2996" for this suite. 
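"Mappings and Item mode" means individual configMap keys are remapped to file paths with per-item modes. Sketch (hypothetical names and key):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg && cat /etc/cfg/renamed-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config               # hypothetical
          items:
          - key: data-1                   # hypothetical key
            path: renamed-key             # the mapping: file name differs from the key
            mode: 0400                    # per-item mode, overrides any defaultMode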
May 8 14:32:44.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:32:44.509: INFO: namespace projected-2996 deletion completed in 6.154515047s • [SLOW TEST:10.428 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:32:44.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 8 14:32:44.625: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 8 14:32:44.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:44.940: INFO: stderr: "" May 8 14:32:44.940: INFO: stdout: "service/redis-slave created\n" May 8 14:32:44.940: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 8 14:32:44.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:45.278: INFO: stderr: "" May 8 14:32:45.278: INFO: stdout: "service/redis-master created\n" May 8 14:32:45.278: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 8 14:32:45.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:45.578: INFO: stderr: "" May 8 14:32:45.578: INFO: stdout: "service/frontend created\n" May 8 14:32:45.578: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 8 14:32:45.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:45.847: INFO: stderr: "" May 8 14:32:45.847: INFO: stdout: "deployment.apps/frontend created\n" May 8 14:32:45.847: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 8 14:32:45.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:46.181: INFO: stderr: "" May 8 14:32:46.181: INFO: stdout: "deployment.apps/redis-master created\n" May 8 14:32:46.181: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 8 14:32:46.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8146' May 8 14:32:46.537: INFO: stderr: "" May 8 14:32:46.537: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 8 14:32:46.537: INFO: Waiting for all frontend pods to be Running. May 8 14:32:56.588: INFO: Waiting for frontend to serve content. May 8 14:32:56.621: INFO: Trying to add a new entry to the guestbook. May 8 14:32:56.634: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 8 14:32:56.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:56.783: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:56.783: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 8 14:32:56.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:57.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:57.011: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 8 14:32:57.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:57.203: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:57.203: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 14:32:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:57.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:57.318: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 8 14:32:57.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:57.426: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:57.426: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 8 14:32:57.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8146' May 8 14:32:57.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 8 14:32:57.549: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:32:57.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8146" for this suite. 
May 8 14:33:35.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:33:35.678: INFO: namespace kubectl-8146 deletion completed in 38.104589959s • [SLOW TEST:51.169 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:33:35.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-651ff945-ca01-4d40-9205-a7637d9964f6 STEP: Creating a pod to test consume configMaps May 8 14:33:35.772: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a" in namespace "projected-6808" to be "success or failure" May 8 14:33:35.794: INFO: Pod "pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.697871ms May 8 14:33:37.797: INFO: Pod "pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025074138s May 8 14:33:39.801: INFO: Pod "pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028674102s STEP: Saw pod success May 8 14:33:39.801: INFO: Pod "pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a" satisfied condition "success or failure" May 8 14:33:39.803: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a container projected-configmap-volume-test: STEP: delete the pod May 8 14:33:39.855: INFO: Waiting for pod pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a to disappear May 8 14:33:39.861: INFO: Pod pod-projected-configmaps-f44ed5c6-f207-4f78-a397-a5208ea2f25a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:33:39.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6808" for this suite. 
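The non-root variant additionally runs the consuming container under a non-zero UID and checks the projected files remain readable. Sketch (the UID is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo            # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                       # illustrative non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/cfg/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config               # hypothetical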
May 8 14:33:45.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:33:45.990: INFO: namespace projected-6808 deletion completed in 6.126719279s • [SLOW TEST:10.311 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:33:45.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 8 14:33:46.123: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-407,SelfLink:/api/v1/namespaces/watch-407/configmaps/e2e-watch-test-resource-version,UID:db24a9bb-f26a-498d-9c9f-da309d608ce2,ResourceVersion:9728076,Generation:0,CreationTimestamp:2020-05-08 14:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 8 14:33:46.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-407,SelfLink:/api/v1/namespaces/watch-407/configmaps/e2e-watch-test-resource-version,UID:db24a9bb-f26a-498d-9c9f-da309d608ce2,ResourceVersion:9728077,Generation:0,CreationTimestamp:2020-05-08 14:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:33:46.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-407" for this suite. 
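The two events logged above are exactly what a watch opened at the first update's resourceVersion should replay: the second modification and the deletion, but not the earlier history. A sketch of the object as the log shows it after the second mutation, with the raw watch call noted as a comment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"                           # state after the second modification, as in the log

# A watch can be opened at an older resourceVersion against the raw API, e.g.
#   GET /api/v1/namespaces/<ns>/configmaps?watch=true&resourceVersion=<rv>
# which replays only events newer than <rv>.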
May 8 14:33:52.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:33:52.210: INFO: namespace watch-407 deletion completed in 6.083617861s • [SLOW TEST:6.220 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:33:52.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:33:56.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7657" for this suite. May 8 14:34:02.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:34:02.512: INFO: namespace emptydir-wrapper-7657 deletion completed in 6.155796698s • [SLOW TEST:10.301 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:34:02.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 8 14:34:07.127: INFO: Successfully updated pod "labelsupdatece7c5de5-b8a6-4efe-9213-101ff07a270c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:34:09.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "projected-6108" for this suite. May 8 14:34:31.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:34:31.234: INFO: namespace projected-6108 deletion completed in 22.082309687s • [SLOW TEST:28.721 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:34:31.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 8 14:34:31.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b" in namespace "projected-661" to be "success or failure" May 8 14:34:31.336: INFO: Pod "downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.115482ms May 8 14:34:33.341: INFO: Pod "downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01594482s May 8 14:34:35.345: INFO: Pod "downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0203713s STEP: Saw pod success May 8 14:34:35.345: INFO: Pod "downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b" satisfied condition "success or failure" May 8 14:34:35.348: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b container client-container: STEP: delete the pod May 8 14:34:35.491: INFO: Waiting for pod downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b to disappear May 8 14:34:35.513: INFO: Pod downwardapi-volume-e3fd4974-bda4-436a-914a-cec93ed9537b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:34:35.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-661" for this suite. 
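This test projects the container's own requests.memory through the downward API and reads it back from the volume. Sketch (names and the 32Mi request are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-resources-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                      # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory       # projected in bytes with the default divisor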
May 8 14:34:41.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:34:41.764: INFO: namespace projected-661 deletion completed in 6.214152412s • [SLOW TEST:10.529 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:34:41.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-363.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-363.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-363.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-363.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-363.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-363.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 14:34:47.882: INFO: DNS probes using dns-363/dns-test-ac43e932-373b-4e33-a1e8-244c8149269b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:34:47.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-363" for this suite. 
May 8 14:34:53.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:34:54.073: INFO: namespace dns-363 deletion completed in 6.129101242s • [SLOW TEST:12.309 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:34:54.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-6bfc66cf-ff1c-4dde-a8cd-62e8311247f0 STEP: Creating a pod to test consume configMaps May 8 14:34:54.161: INFO: Waiting up to 5m0s for pod "pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f" in namespace "configmap-5395" to be "success or failure" May 8 14:34:54.168: INFO: Pod "pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85546ms May 8 14:34:56.225: INFO: Pod "pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063633683s May 8 14:34:58.230: INFO: Pod "pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06842803s STEP: Saw pod success May 8 14:34:58.230: INFO: Pod "pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f" satisfied condition "success or failure" May 8 14:34:58.233: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f container configmap-volume-test: STEP: delete the pod May 8 14:34:58.268: INFO: Waiting for pod pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f to disappear May 8 14:34:58.276: INFO: Pod pod-configmaps-c99f622c-edf3-4f66-a474-ebafd289264f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:34:58.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5395" for this suite. 
May 8 14:35:04.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:35:04.401: INFO: namespace configmap-5395 deletion completed in 6.12235664s • [SLOW TEST:10.328 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:35:04.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-b3cf5675-9275-4b78-ae25-204b427101c2 in namespace container-probe-8632 May 8 14:35:08.483: INFO: Started pod busybox-b3cf5675-9275-4b78-ae25-204b427101c2 in namespace container-probe-8632 STEP: checking the pod's current state and verifying that restartCount is present May 8 14:35:08.486: INFO: Initial restart count of pod busybox-b3cf5675-9275-4b78-ae25-204b427101c2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:39:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8632" for this suite. 
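The long runtime here (about four minutes of observation) comes from verifying that restartCount stays at 0 while an exec probe keeps succeeding. A sketch of such a pod, assuming the container creates /tmp/health once at startup and never removes it:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo                # hypothetical
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5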
May 8 14:39:15.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:39:15.650: INFO: namespace container-probe-8632 deletion completed in 6.152006262s • [SLOW TEST:251.248 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:39:15.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 8 14:39:15.734: INFO: Waiting up to 5m0s for pod "downward-api-01598539-1d69-4599-923d-b866346a9604" in namespace "downward-api-6331" to be "success or failure" May 8 14:39:15.737: INFO: Pod "downward-api-01598539-1d69-4599-923d-b866346a9604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.968558ms May 8 14:39:17.741: INFO: Pod "downward-api-01598539-1d69-4599-923d-b866346a9604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006470057s May 8 14:39:19.745: INFO: Pod "downward-api-01598539-1d69-4599-923d-b866346a9604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010409985s STEP: Saw pod success May 8 14:39:19.745: INFO: Pod "downward-api-01598539-1d69-4599-923d-b866346a9604" satisfied condition "success or failure" May 8 14:39:19.747: INFO: Trying to get logs from node iruya-worker2 pod downward-api-01598539-1d69-4599-923d-b866346a9604 container dapi-container: STEP: delete the pod May 8 14:39:19.768: INFO: Waiting for pod downward-api-01598539-1d69-4599-923d-b866346a9604 to disappear May 8 14:39:19.773: INFO: Pod downward-api-01598539-1d69-4599-923d-b866346a9604 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:39:19.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6331" for this suite. 
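The env-var form of the downward API resolves pod metadata once, at container start, unlike the volume form, which can update. Sketch (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo                 # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP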
May 8 14:39:25.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:39:25.875: INFO: namespace downward-api-6331 deletion completed in 6.098885625s • [SLOW TEST:10.224 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 8 14:39:25.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 8 14:39:25.978: INFO: Waiting up to 5m0s for pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e" in namespace "var-expansion-3170" to be "success or failure" May 8 14:39:25.991: INFO: Pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.70042ms May 8 14:39:27.996: INFO: Pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01756546s May 8 14:39:30.001: INFO: Pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.022210039s May 8 14:39:32.008: INFO: Pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029453981s STEP: Saw pod success May 8 14:39:32.008: INFO: Pod "var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e" satisfied condition "success or failure" May 8 14:39:32.010: INFO: Trying to get logs from node iruya-worker pod var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e container dapi-container: STEP: delete the pod May 8 14:39:32.035: INFO: Waiting for pod var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e to disappear May 8 14:39:32.039: INFO: Pod var-expansion-7940c737-8491-45c7-b943-5b87bc2a4c6e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 8 14:39:32.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3170" for this suite. 
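Variable expansion substitutes $(VAR) references in command and args from the container's own env block before the process starts, no shell required. Sketch (hypothetical values):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo                # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from args"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]             # expanded by the kubelet before exec, not by the shell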
May 8 14:39:38.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 8 14:39:38.131: INFO: namespace var-expansion-3170 deletion completed in 6.089730853s • [SLOW TEST:12.256 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS
May 8 14:39:38.131: INFO: Running AfterSuite actions on all nodes May 8 14:39:38.131: INFO: Running AfterSuite actions on node 1 May 8 14:39:38.131: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6233.930 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS